March 26, 2007

Case Notes 12

As may have been glancingly alluded to before, the human visual system is really pretty slick when it comes to reconstructing the universe from a noisy optical input channel. We look. We see. We build internal models of the external world from the data provided by our senses; and we then interpret the world on the basis of our models. Thus, at the most fundamental organic level, as human beings, we are scientists.

The neurological and cognitive processes by which this happens are not well understood. There is reasonable information on the ins and outs of various implicated subsystems, but exactly how all that gets integrated into a perceived scene, and how the scene in turn feeds back to the lower levels, is the subject of much speculation. It's a matter of attempting to model the modelling.

One thing that helps in this endeavour is to look at edge cases, where the system either breaks down or else manages interestingly not to. The brain is extremely good at papering over its own cracks, but sometimes that interior sleight of hand becomes -- or can be made -- apparent.

A famous example is the blind spot: the point on each retina where the optic nerve heads off to the brain has no light receptors, leaving a significant patch of each eye's visual field blind. The literal content of those patches is simply not there. And yet, we are never directly aware of any absence: we don't -- can't -- see these gaping holes in our field of view. The blind spots don't overlap, so when both eyes are open the brain can fill in the gap from each side by using the information from the other. With only one eye open, the filling in is more a kind of guesswork, based on the surroundings. It's pretty clever -- it will fill in complex patterns, for example -- but nevertheless sometimes entirely wrong. It relies on congruity: if what's in the blind spot doesn't match its surroundings, it won't just not be seen, it will be seen not to be there.

In fact, the brain is filling stuff in all the time. Looking around -- at least when sober -- you generally have the impression of a crystal clear panorama in which everything is present in extraordinary detail. You may not be able to make out the minutiae on the more peripheral items, but you can rest assured they will be there should you turn your attention to them. At some level, of course, we are aware that we don't see things as clearly on the edges as at the centre, but that doesn't make the experience any less seamless. All the stuff we're not actually seeing clearly, we still imagine to be clear. Just as with the blind spot, the brain covers up its covering up.

While blind spots are universal, other visual failures may occur on an individual basis. Damage to a portion of the retina or the nerves feeding from it can lead to scotomas, which are just another kind of blind spot, but ones that haven't been built in from the start. These too will commonly be completed by the visual perceptual system, especially if they are monocular, and often pass unnoticed unless specifically looked for. Yet again, you typically can't see what you can't see.

With significant binocular loss of visual field, as often occurs in the elderly via age-related macular degeneration (ARMD), things can get trickier. One possible consequence of the resulting loss of visual acuity is Charles Bonnet Syndrome, in which the patient experiences hallucinations within the absent portion of their visual field.1

Two general classes of hallucination are observed: simple, meaning flashes or geometrical patterns in the visual field, and complex, where the hallucinated entities are fully formed objects such as people or animals, usually located in the scene in a contextually plausible way.2 The distinction may sometimes be quite murky, given the subjective nature of hallucination: all there is to go on is the patient's own description. Although models for either kind of hallucination are far from settled, it seems likely that the two classes arise through different mechanisms. In particular, simple hallucinations probably result from bottom-up processes, which analyse the input signals in various mechanistic ways to identify things like edges, orientations, surfaces and colours; while complex hallucinations probably involve top-down processes, by which the constituents of a visual scene are organised into conceptual units and interpreted on the basis of some set of beliefs about what is being seen.
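
For the curious, here's a toy sketch -- my own, not taken from any of the papers -- of what one bottom-up analysis step might look like in code: convolving an image with an orientation-tuned Gabor filter, a standard stand-in for the edge- and orientation-selective cells found early in the visual cortex. The function names and all the parameter values are just illustrative; numpy and scipy are assumed.

```python
# Toy illustration of bottom-up orientation detection with a Gabor filter.
# Everything here (names, sizes, wavelengths) is an arbitrary choice for
# demonstration, not a model anyone actually uses as-is.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(orientation, wavelength=8.0, sigma=3.0, size=21):
    """Gabor kernel tuned to the given orientation (radians).

    orientation 0 gives vertical stripes (responds to vertical bars);
    pi/2 gives horizontal stripes (responds to horizontal bars).
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid runs across the chosen orientation.
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def orientation_response(image, orientation):
    """Response map: large where the image contains structure at that orientation."""
    kernel = gabor_kernel(orientation)
    return np.abs(convolve2d(image, kernel, mode="same", boundary="symm"))

# Toy input: a single vertical bar on a blank field.
image = np.zeros((64, 64))
image[:, 30:34] = 1.0

vertical = orientation_response(image, 0.0)          # tuned to vertical bars
horizontal = orientation_response(image, np.pi / 2)  # tuned to horizontal bars
print(vertical.max(), horizontal.max())  # the vertical detector responds far more
```

Run on that toy image, the vertically tuned detector fires strongly while the horizontally tuned one barely stirs -- which is roughly all that "identifying edges and orientations" amounts to at this mechanistic level.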

Some fairly concrete models exist for bottom-up visual processing, and for the corresponding simple hallucinations. There are banks of neurons in different sections of the visual cortex that respond to basic elements in the observed image, and various circumstances -- say, changes in the relative activity of different neurotransmitters (as may be caused by psychedelic drugs) or hypersensitisation from loss of normal input (in CBS) -- may cause these banks to trigger abnormally. Patterns of neural firing in these areas will be perceived as (different, but related) patterns within the visual field, and many of the sorts of patterns that are commonly reported by patients can be nicely reproduced by the mathematics of dynamic systems and the geometries of our sensory apparatus.3
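
To make footnote 3 slightly more concrete, here's another rough sketch -- again mine, and only a cartoon of the real models: the mapping from retina to primary visual cortex is approximately log-polar, so perfectly ordinary stripes of cortical activity, carried back into the visual field, come out as the concentric tunnels, radial funnels and spirals that patients so often report. The code below just evaluates that inverse mapping; numpy and matplotlib are assumed, and the stripe frequency and orientations are arbitrary.

```python
# Cartoon of the geometric-hallucination idea: stripes in cortical
# coordinates, viewed through the approximate log-polar retino-cortical
# map, become the classic "form constants" in the visual field.
# All parameter values are illustrative.

import numpy as np
import matplotlib.pyplot as plt

def visual_field_pattern(orientation_deg, size=512, freq=6.0):
    """Render the visual-field image of a cortical stripe pattern.

    orientation_deg: stripe orientation in cortical coordinates.
      0  -> stripes across the log-radius axis -> concentric rings (tunnel)
      90 -> stripes across the angle axis      -> radial spokes (funnel)
      45 -> oblique stripes                    -> spirals
    """
    # Visual-field coordinates, centred on the fovea.
    xs = np.linspace(-1, 1, size)
    X, Y = np.meshgrid(xs, xs)
    r = np.hypot(X, Y)
    theta = np.arctan2(Y, X)

    # Approximate retina -> cortex map: (r, theta) -> (log r, theta).
    u = np.log(np.maximum(r, 1e-3))
    v = theta

    # A plain cortical plane wave (stripes) at the given orientation.
    phi = np.deg2rad(orientation_deg)
    return np.cos(freq * (u * np.cos(phi) + v * np.sin(phi)))

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, angle, label in zip(axes, (0, 90, 45), ("tunnel", "funnel", "spiral")):
    ax.imshow(visual_field_pattern(angle), cmap="gray")
    ax.set_title(f"stripes at {angle} deg -> {label}")
    ax.axis("off")
plt.tight_layout()
plt.show()
```

The point of the toy is just that nothing exotic has to happen in the cortex itself: the eye bone's connection to the brain bone does the work of turning humdrum stripes into rings, spokes and spirals.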

Modelling complex hallucinations is trickier, because our understanding of the higher level cognitive processes likely to be involved tends to be rather vague and hand-waving. It's difficult to characterise the top-down effects of, say, consciousness on the visual system when we don't have a convincing model of what consciousness is, nor of what the upper levels of visual processing -- with which it presumably interacts -- are doing. So although it seems intuitively likely that the organic causes may be quite similar -- abnormal firing patterns in particular functional groupings of neurons -- we don't currently have any way of quantitatively representing those causes and their perceptual consequences.

As an aside, estimates of the prevalence of CBS vary widely. This is partly down to an uncertainty of definition, but also because people are often extremely reluctant to report hallucinations, for fear of being considered insane and institutionalised. CBS hallucinations are, apparently, usually not especially upsetting in themselves, but the idea of losing one's mind can be very scary indeed.


1 CBS is imperfectly defined, with different sources applying different criteria, not always requiring associated eye damage. It usually means visual hallucinations where the patient is aware that the hallucinations aren't real, and where there is no cognitive impairment -- which is to say, the hallucinator isn't insane. Such distinctions can, of course, be somewhat subjective.
2 This is true for hallucinations in general, not just for Charles Bonnet. The distribution of the two classes varies, though, between, say, schizophrenia or dementia and CBS.
3 Which is to say, the particular way the eye bone's connected to the brain bone. Now hear the word of the lord!

Posted by matt at March 26, 2007 10:37 AM
Comments

It may be aimed too much at the layman for your taste, but you might consider checking out V. S. Ramachandran's Phantoms in the Bran. Well written and fascinating at least to the uninitiated. A chapter on some of the things you discuss here; chapters also on phantom pain, Capgras' Syndrome, etc.

Posted by: Faustus, M.D. at March 26, 2007 01:04 PM

Phantoms in the Bran sounds like some really dodgy horror movie, possibly a Children of the Corn spoof.

(I know, I could have just silently fixed the typo, but why miss the opportunity for a cheap joke? At least I resisted any mention of cereal killers. Oh, wait...)

Ramachandran is cited by some of the papers I'm reading at the moment, so if I have time I may give it a look. Deadline looms, though.

Posted by: matt at March 26, 2007 01:35 PM

Phantoms In The Bran III, with Sarah Michelle Gellar, is actually my favorite of the series.

Posted by: Robin at March 26, 2007 03:25 PM

Oh, my dear God.

I have to go kill myself now.

Posted by: Faustus, M.D. at March 26, 2007 11:06 PM
