June 07, 2007
Project
Fluorescence microscopy is a popular and important imaging technique that has pretty much conquered the world of experimental biology. Among its many virtues is the fact that it is fairly non-destructive, meaning it can be used to view live samples in approximately physiological conditions. All kinds of sophisticated variations have been developed around it, but because it uses visible light to form the image it eventually runs up against a hard constraint on just how much detail can be made out, known as the diffraction limit. Once you get down to a few hundred nanometres or so, things fuzz out and you just can't make out what's going on.

This is pretty frustrating, because there's a lot happening down at that level and beyond, some of it rather important. Obviously there's no real bound to how much more detail biologists would like to have -- in the same way that you never have enough bandwidth or RAM1 or, you know, days left in your life -- but even a bit more would be an improvement for a whole range of interesting cellular structures and behaviour. And while there are other imaging processes, such as electron microscopy and various scanning probe methods, that are capable of squinting down to those scales, most of them have to be done in destructive, non-physiological conditions; and/or they're surface-bound, unable to see anything going on inside the cell the way optical fluorescence microscopy can. So, ideally, we would like to extend the resolution of the latter beyond that pesky diffraction limit. As it turns out, there are some ways to do just that, and the one at hand is called structured illumination microscopy -- because it works by imposing a structure on the light you shine on the subject.

To understand how it works, we need to take a very quick detour into the frequency domain. We generally think of images in spatial terms: this area over here is dark, that one over there is stripy, whatever. However, it is also possible to think of them as the superposition of a lot of regular repeating patterns or waves. It may not be immediately obvious why you would want to do so, but it turns out to be very useful in all kinds of circumstances.2 So for any image in the spatial domain, defined in terms of positions, there is a corresponding image in the frequency domain, defined in terms of waves. The two representations are theoretically equivalent -- the spaces are reciprocal -- and you can switch back and forth between them with a certain amount of computational effort (and some loss of precision).

Small details in the spatial domain -- which are what we want our microscope to resolve -- correspond to high frequencies. What the diffraction limit means, in effect, is that there is only a certain region of the frequency domain that a beam of light can convey; anything outside that region gets lost. If we want to capture more detail -- which lives outside that observable region -- we need somehow to smuggle it back in via the medium of coarse, low-frequency, observable features. The vehicle for doing so in structured illumination is moiré fringes.3 When two regular patterns overlap, they produce much coarser interference bands -- moiré fringes -- and since we know exactly what pattern we projected onto the sample, those coarse, observable fringes carry recoverable information about the fine structure that produced them.
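To make the frequency-domain detour a little more concrete, here is a minimal numpy sketch of switching between the two representations. The "image" is just random noise standing in for a micrograph, and the sizes are arbitrary; the point is only that the round trip between domains is (numerically, near enough) lossless.

```python
import numpy as np

# Take an "image" into the frequency domain and back again, and check that
# we recover it, up to floating-point rounding. The image here is random
# noise standing in for a real micrograph.
rng = np.random.default_rng(0)
img = rng.random((256, 256))

spectrum = np.fft.fft2(img)          # spatial -> frequency domain
roundtrip = np.fft.ifft2(spectrum)   # frequency -> spatial domain

# Equivalent up to numerical precision (~1e-15):
print(np.max(np.abs(img - roundtrip.real)))
```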
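And here is a toy 1-D model of the moiré trick itself, as a hedged sketch rather than a real microscope simulation: all the numbers (field size, frequencies) are illustrative, and a hard low-pass cutoff stands in for the actual optical transfer function. The sample contains a stripe pattern whose spatial frequency lies beyond the diffraction cutoff, so a plainly illuminated image shows nothing of it; multiplying by a known illumination pattern creates a beat at the difference frequency, which falls inside the passband and survives.

```python
import numpy as np

N = 1024                      # samples across the field of view
x = np.arange(N) / N          # positions, in units of the field width

f_detail = 90.0               # fine structure, cycles per field width
f_cut    = 64.0               # diffraction cutoff: higher frequencies are lost
f_illum  = 60.0               # structured illumination, itself below the cutoff

sample = 1.0 + 0.5 * np.cos(2 * np.pi * f_detail * x)   # the specimen
illum  = 1.0 + 1.0 * np.cos(2 * np.pi * f_illum  * x)   # projected stripes

def image(emission):
    """Crude stand-in for the microscope: keep only frequencies below f_cut."""
    spectrum = np.fft.rfft(emission)
    freqs = np.fft.rfftfreq(N, d=1.0 / N)   # cycles per field width
    spectrum[freqs > f_cut] = 0.0
    return spectrum

def power_at(spectrum, f):
    """Spectrum magnitude at the bin nearest frequency f."""
    freqs = np.fft.rfftfreq(N, d=1.0 / N)
    return abs(spectrum[np.argmin(abs(freqs - f))])

plain  = image(sample)            # uniform illumination
struct = image(sample * illum)    # fluorescence ~ sample x illumination

# The fine detail at f=90 is wiped out by the cutoff under plain
# illumination, but its moiré beat at f_detail - f_illum = 30 gets through.
print("detail at f=90, uniform illumination:   ", power_at(plain, f_detail))
print("moiré beat at f=30, structured illum.:  ",
      power_at(struct, f_detail - f_illum))
```

In an actual structured illumination reconstruction you would record several images with the pattern shifted in phase and rotated in orientation, then solve for the displaced frequency components and shift them back to where they belong; the sketch only shows the essential step, that a beat at f_detail - f_illum survives the passband while f_detail itself does not.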
1 Although nobody could ever need more than 640k, obviously.
2 Several popular compression techniques for images, sound and video are based on this, for example.
3 If we could project arbitrarily fine patterns onto the sample, it would probably be possible -- although extremely laborious -- to reconstruct arbitrarily-high levels of detail. But we can't: the projected pattern is also constrained by the diffraction limit.
Posted by matt at June 7, 2007 10:42 PM