Except I shall see in his hands the print of the nails, and put my finger into the print of the nails, and thrust my hand into his side, I will not believe – John 20:25, The Holy Bible, King James Version
The phrase ‘Doubting Thomas’ – a person who refuses to believe without direct, personal experience – has its origin in the first of two stories about the apostle Thomas that concern the relationship between belief and evidence in the context of contentious claims. I think it’s interesting to juxtapose the two stories, because Thomas stands at opposite ends of the moral in each. In John 20:24-29, we are told that Thomas – who was absent when Jesus was first seen
after the resurrection – refuses to believe without the evidence of his own
eyes. When Jesus appears to the assembled disciples eight days later, Thomas –
his demand for direct evidence met – proclaims his faith in the risen lord. The second story is the Assumption of Mary, which has it that at the end of her life Mother Mary was taken up, body and soul, into heaven.
Some traditions also have it that Thomas – magically
transported from India for the purpose – was the only
witness to this. Inverting their earlier roles in the resurrection story, the
other apostles are skeptical of Thomas’s claims until they’re convinced by the
physical evidence of an empty tomb and a girdle Mary has left behind.
[Image – RIGHT: The Incredulity of Saint Thomas, by Caravaggio. LEFT: A depiction of the Assumption of Mary, showing Thomas the Apostle receiving the girdle.]
I’m aware that, as a document, the bible can be understood – historically, linguistically and as a piece of literature – as belonging to a particular context with its own conventions. However, I’m neither a believer nor a bible scholar, so my own perspective is the only one I have to gauge my reactions to the text and the stories it contains. In that context, I find it odd that the
apostles – so impressed with Jesus that they follow him and then, after his
death, preach his Good News – seem so block-headed when they hear of new
evidence of the miraculous in Him and His family. A partial explanation might
be that these stories, while being presented as the testimony of people who were there, are also intended to have an impact on people far removed from these events in both time and space. That these incredible events were witnessed, and proved convincing even to those admonished by Jesus himself as ‘slow to believe’, seems designed to be…well, convincing.
Similarly, Paul, in 1 Corinthians, stresses the strength of the witness to the resurrection: Jesus appeared to Peter, to the rest of the Twelve, to five hundred other brethren. And because the raw fact isn’t the only thing that matters – the meaning of the event being as important, or more so – Paul puts it in no uncertain terms:
And if Christ be not raised, your faith is vain; ye are yet in your sins.
Then they also which are fallen asleep in Christ are perished.
If in this life only we have hope in Christ, we are of all men most miserable.
In my
last post, I used some quirky neurology to illustrate a point. For the sake
of brevity and flow I acknowledged – in passing – the caveats that have to go
along with any discussion of localized ‘functions’ in the brain, without
getting into specifics. While direct evidence is a seemingly obvious gold
standard for belief, in the context of modern science ‘seeing is believing’ can
be a tricky concept: the observer is always several technological and
methodological steps removed from the object of study. That can be either because the phenomena of interest occur on spatiotemporal scales far removed from those of mere humans – on a range from the infinitesimal to the immense – or, as in the context of brain science, because it’s hard to get direct evidence without breaking the very thing that piqued your interest in the first place. That said, I’d like to share some of the progress that’s been made in the visual system, probably the best example of what’s been possible under the loose assumption that mind – as we experience it – has something to do with that walnut of an organ sitting in your skull.
[Image – TOP LEFT: retinal ganglion receptive field. TOP RIGHT: simple cell receptive field. BOTTOM LEFT: orientation column. BOTTOM RIGHT: reconstruction of visual experience from brain activity.]
Optic nerve fibres, which bear information from the retina to the brain, accept inputs from specific retinal cells in particular ways that define their ‘receptive fields’. For example, on the top left of the picture above, an ‘ON centre’ cell is excited by light that falls on retinal cells in the centre of its receptive field, but inhibited by light that falls on the periphery. This makes it a ‘point detector’.
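To make that concrete, here’s a minimal sketch of an ON-centre receptive field as a difference of Gaussians – a standard textbook idealisation, not anything from the original studies; the function names, sizes and sigmas are arbitrary illustrative choices:

```python
import numpy as np

def dog_receptive_field(size=21, sigma_centre=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians idealisation of an ON-centre receptive field:
    a narrow excitatory centre minus a broader inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    centre = np.exp(-r2 / (2 * sigma_centre**2)) / (2 * np.pi * sigma_centre**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return centre - surround

def cell_response(stimulus, rf):
    """Weighted sum of the stimulus over the receptive field, rectified
    because firing rates can't go below zero."""
    return max(0.0, float(np.sum(stimulus * rf)))

rf = dog_receptive_field()

spot = np.zeros_like(rf)
spot[10, 10] = 1.0           # a point of light dead centre (size=21, so centre is index 10)
diffuse = np.ones_like(rf)   # uniform light across the whole field

print(cell_response(spot, rf))     # strong response: the 'point detector' at work
print(cell_response(diffuse, rf))  # near zero: centre and surround cancel out
```

Excite the centre alone and the model cell fires; flood the whole field and the excitation and inhibition wash each other out.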
So-called ‘simple cells’ in primary visual cortex collect these inputs to form their own receptive fields: the receptive fields of the ON centre cells line up in such a way that simple cells are most responsive to lines of a specific orientation (top right). There are also ‘complex’ cells, which are additionally selective for motion, and ‘hypercomplex’ cells that are selective for orientation, motion, direction and length. A higher level of organisation is the orientation column, a region of the cortex whose neurons respond to line stimuli of varying angles in an organised way: as you go along the column, the preferred angle changes gradually (bottom left, preferred orientations in green).
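Simple-cell-like orientation tuning is commonly idealised with Gabor filters (a Gaussian window multiplied by a sinusoid). The sketch below – again purely illustrative, with made-up parameters – builds a miniature ‘orientation column’ as a bank of Gabors whose preferred angles shift gradually, then shows that a tilted bar drives the cell tuned to its angle hardest:

```python
import numpy as np

def gabor(size=21, theta=0.0, wavelength=8.0, sigma=3.0):
    """Gabor filter: a Gaussian-windowed sinusoid, a common idealisation of a
    simple cell's receptive field. `theta` is the preferred bar orientation."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    perp = -xx * np.sin(theta) + yy * np.cos(theta)  # axis running across the stripes
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * perp / wavelength)

def bar_stimulus(size=21, angle=0.0):
    """A thin bright bar through the centre of the image at `angle`."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    dist = np.abs(-xx * np.sin(angle) + yy * np.cos(angle))
    return (dist < 1.0).astype(float)

# A toy 'orientation column': preferred angle shifts gradually, cell to cell.
angles = np.linspace(0, np.pi, 8, endpoint=False)
column = [gabor(theta=t) for t in angles]

bar = bar_stimulus(angle=np.pi / 4)
responses = [abs(float(np.sum(bar * g))) for g in column]
print(np.degrees(angles[np.argmax(responses)]))  # 45.0: the matching cell wins
```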
What I’m trying to suggest is that the way neurons react to stimuli – visual in this case, but it could be anything, including ‘internal’ stimuli – is non-random and highly organised. This organised reactivity, combined with a lot of other knowledge about physiology, genetics, molecular dynamics and so on, allows us to build partial, but productive, models of how the physical stuff of the brain can bear a disciplined relationship to the features of our experience as conscious creatures. You might object that we don’t experience the world in terms of free-floating edges and lines, but as an integrated whole. But studies that take advantage of ‘multistable’ perceptual phenomena like the Necker cube can start to pry into subjective experience, because the perception changes over time even though the stimulus is static. It’s a hop, skip and a jump through many other layers of complexity, but the extent of our ability to explore the links between brain activity and mental phenomena is exemplified by the picture on the bottom right. The ‘in focus’ picture is a clip from a movie that was presented to volunteers in an fMRI experiment; the ‘blurred’ picture is a reconstruction of the seen object from the activity detected by the MRI machine. Not mind reading, but clearly not just noise either.
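To give a flavour of that decoding step – and only a flavour, since the real work rests on far richer encoding models fitted to actual fMRI data – here is a toy in which ‘voxel’ responses are simulated as noisy linear functions of an image, and a ridge-regression decoder is trained to map activity back to pixels. Everything here (the linear encoding, the noise level, the regularisation) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: stimuli are 8x8 images (64 pixels), and each of 200 'voxels'
# responds as a noisy linear function of the pixels. A cartoon, not the
# actual encoding models used in the fMRI reconstruction studies.
n_pix, n_vox, n_train = 64, 200, 500
encoding = rng.normal(size=(n_vox, n_pix))   # the brain's 'code'; unknown to the decoder

stimuli = rng.normal(size=(n_train, n_pix))
activity = stimuli @ encoding.T + 0.5 * rng.normal(size=(n_train, n_vox))

# Train a ridge-regression decoder mapping brain activity back to pixels.
lam = 10.0
W = np.linalg.solve(activity.T @ activity + lam * np.eye(n_vox),
                    activity.T @ stimuli)

# Reconstruct a held-out stimulus from its (noisy) brain response.
test_stim = rng.normal(size=n_pix)
test_act = encoding @ test_stim + 0.5 * rng.normal(size=n_vox)
recon = test_act @ W

print(np.corrcoef(test_stim, recon)[0, 1])  # well above zero: blurry, but not noise
```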
Blessed are those who have not seen and yet have believed.
Except, of course, it’s not that simple. I’ve
been trying to lead you up the garden path, a little. A large part of the work that has been done in ‘single unit’ studies – direct electrical recording from cells – was performed with simple stimuli like gratings and edges and so on, not to mention the fact that the data were not collected in humans. While the findings have been replicated and extended massively, what has actually been established is that the ‘optimal stimuli’ are the ones we have thought to present and for which we get the best response. We can’t know, a priori, what the ‘best’ stimuli are from the perspective of the neurons themselves. Even the impressive reconstructions from fMRI data are the results of contrived situations, where decisions are made about what stimuli to present and the way in which the algorithms that do the decoding are trained. This is deeper than a
methodological or technical issue: we, as people, have conventions about the
way we categorize phenomena, and these categories may or may not have much
similarity to ‘true’ categories in the phenomena themselves. I once had a
conversation with a friend about the fascinating complexity of embryo
development: for them, the sheer complexity was a point against the science.
The issue is that the phenomena themselves are not complicated: if they do anything at all, they just do what they do. The complexity comes from trying to fit a description of that ‘doing’ into our puny minds, which have been left unprepared – by evolution or design – to comprehend these things easily. People are usually happy to accept ‘vision’ as a valid category and don’t object to the idea of a ‘visual system’. But what about the ‘high level’ categories we create for events in our mental lives: emotion, judgement, decision, valuation, consciousness? To what extent can we be sure that the ‘systems’ we see in brain activity are a function of the brain itself rather than of the categories we choose to split phenomena into and, by extension, the questions we choose to ask?
There is no such thing as a ‘valueless’ fact, because every
fact is embedded within the perspectives that motivated finding it, and is interpreted
in the context of a web of other findings,
models and axioms. How much of a problem this is depends on your
perspective. Firstly, this is emphatically not a religion-versus-science issue because, as I’ve pointed out before, to get anywhere beyond crippling solipsism we all have to concede that meaning might, in principle, be extracted from patterns in phenomena. If no-one can assert anything at all because no assertion is valueless, we’re all a bit buggered. Secondly, if you view each and every ‘scientific’ pronouncement as an objective, timeless statement about the world out there, it is a problem – we’re limited beings, and there’s truth to the idea that all we ever really experience is our own minds. However, I would argue that if you see science as a process, this is less of a problem. In this context, it’s fine that no single person can see the world except through their own mental blinders, because – with luck – each person has a different set of blinders. This is why one should be wary of making pronouncements about the certitude of given factual canons, but can have some faith in the scientific method as a process for evaluating the relative merits of competing ideas. In the context of brain science, the assertion that the mind is – in some senses, at least – synonymous with the brain is an approach, not an established fact: an approach that has been extremely fruitful, and one I would suggest has yet to exhaust its usefulness.
[Image – LEFT: Cajal’s neuronal forest. RIGHT: Olfactory bulb of a dog, by Golgi.]
Of course, this means there is a problem when we all share the same blinkers. In this case we’re at the mercy of fate: we
have to hope to be nudged out of our complacency by genius, circumstance or
novelty. Sometimes, all three. One of the foundation stones of the modern
understanding of brain physiology and function is something called The Neuron Doctrine.
It’s a heavy-sounding name for something quite simple: neurons are cells – individual units – like those found elsewhere in the body. This innocuous-seeming idea was a big deal in the 1890s because of the prevailing wisdom at the time. The standing model for nervous physiology was reticular theory,
which held that everything in the nervous system was one continuous, unbroken
interlinked mass. The interesting thing is that the decisive work that toppled
reticular theory, performed by the Spanish histologist Santiago Ramón
y Cajal, was undertaken with something called the Golgi method, which was
developed by the greatest defender of reticular theory, Camillo Golgi. The two
men had access to the same methods, the same facts, and both were accomplished
scientific artists after the fashion of a time without advanced imaging
technology (see above). But, through circumstance and a different perspective, Cajal was able to see the significance of something that, for Golgi, was extraneous detail.
Undoubtedly, only artists devote themselves to science…



