Introducing biophotonics to undergraduates across the disciplines
5 June 2009
Proceedings Volume 9666, 11th Education and Training in Optics and Photonics Conference; 96660P (2009)
Event: Eleventh International Topical Meeting on Education and Training in Optics and Photonics, 2009, St. Asaph, United Kingdom
This paper describes our approach to introducing the basic principles of experimental Biophotonics to undergraduates. We have centered on optical microscopy since this is fundamental to most experimental activity associated with Biophotonics whether as a research, diagnostic or therapeutic tool. The major issues associated with imaging include spatial resolution, image enhancement and image interpretation. We have elected to guide students through the principles underlying these concepts by using three linked experimental investigations. The first deals with Fourier Optics and imaging at the fundamental level including the impact of such factors as numerical aperture, illumination wavelength and spatial filtering. The second is an introduction to optical microscopy including the use of digital image capture and basic image manipulation, whilst the third investigates image enhancement techniques such as the use of fluorescent labels and specifically tailored illumination techniques.


Over the past ten years Biophotonics, the integration of biology with photonics technology and methods, has established a significant place in academic and industrial research, clinical medicine, environmental science and biosensing. Consequently, Biophotonics is now beginning to find a place both in biological sciences courses and in those dealing with conventional physical sciences and optics. Our objective in designing these experimental investigations has been to serve both communities (through highlighting specific features for each sector) and also to facilitate essential communication between the two complementary scientific disciplines.

Whilst Biophotonics undeniably embraces a very wide range of concepts and experimental techniques, we have selected optical microscopy from these as the absolute basic tool which underpins the vast majority of current activity in Biophotonics. There is of course much fundamental science which underpins the use of such microscopic tools, including the interaction of light with matter, basic biology, notably at the cell level, and the more specific issues associated with the interaction of light with biological samples. Additionally we must consider the basic physics of light propagation (waves, interference and diffraction), light as a stream of photons and, for some of the more advanced work, the basics of non-linear optics, which underpins techniques such as multiphoton fluorescence and Raman spectroscopy. Exploiting all of these tools does, however, inevitably involve some form or other of imaging, and since much of this is associated with cell biology, microscopic imaging is clearly absolutely essential.

In a historical context optical imaging has always been crucial in developing an understanding of the living world. It was with the invention of the optical microscope about 400 years ago that the basic physical structure of life, the cell, was first seen and real biological research started to make rapid progress. Since then there has been a constant desire to image with ever higher resolution into more intact structures, thereby facilitating a true understanding of what is taking place at the sub-cellular level, and in turn how these interactions at the cellular level affect large scale changes within animals. In parallel with these advances there have been significant improvements in the understanding of the imaging process itself. Contributions from Ernst Abbe in defining the resolution limits and Zernike’s insights into wavefront propagation phenomena have arguably been the most important, though ‘new physics’ is currently beginning to push these concepts yet further.


There are three principal aspects to this topic:

  • The very basics, including magnification, resolution and aberrations.

  • Image enhancement techniques which can be implemented optically, principally spatial filtering.

  • The impact of illumination sources.

Magnification is treated simply as a recapitulation of straightforward imaging optics, in effect a reiteration of the simple thin lens law, enabling students to fall back on work that they would all originally have undertaken in their school career. We are thus building on common and well appreciated ground. At this stage the concepts of principal planes in complex lens structures and similar ideas can be neglected – indeed the conceptual underpinning required for the biological microscope can be reliably obtained through the simple thin lens approximation, though the lens designer will clearly need to take a different perspective. Figure 1 illustrates the basics of microscope magnification, including the simple objective/eyepiece system and the more common so-called “tube lens” approach, which offers the prospect of inserting other optical components between the objective lens and the tube lens in order to provide the facility for some types of image enhancement. We include both conventional and infinity corrected microscopy systems to ensure that in practical use students appreciate the difference.
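A minimal numerical sketch (in Python, with illustrative focal lengths and distances rather than real instrument values) shows how the thin lens law and the resulting magnifications are evaluated:

```python
# Thin lens law: 1/f = 1/u + 1/v; lateral magnification M = v/u.
# All numbers below are illustrative only - a real objective is a
# multi-element lens and must be treated via its principal planes.

def image_distance(f, u):
    """Image distance v for focal length f and object distance u (thin lens)."""
    return 1.0 / (1.0 / f - 1.0 / u)

def magnification(f, u):
    """Lateral magnification |M| = v/u for a thin lens."""
    v = image_distance(f, u)
    return v / u

# Hypothetical objective: f = 4 mm, object placed 4.2 mm away
m_obj = magnification(4.0, 4.2)      # v = 84 mm, so M = 20x
# Eyepiece treated as a simple magnifier referred to the 250 mm near point
m_eye = 250.0 / 25.0                 # f_eye = 25 mm gives 10x
total = m_obj * m_eye                # 200x overall
```

The same two functions can be reused to show how moving the object towards the focal plane drives the magnification up, which is precisely how the objective is operated.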

Figure 1:

The basic imaging mechanisms in microscopy. Upper – the “finite tube” configuration and lower – the “infinity corrected” microscope. The latter enables non-focusing optical elements to be placed in the infinity space to facilitate image enhancement.


This conceptual underpinning of straightforward microscope operation however needs further investigation. The essential principles can be gleaned from a thorough assessment of the image performance of a single lens, and from this the definition of important image formation parameters. At this stage it should also be noted that the principles involved can be applied to any imaging modality that uses waves, from MRI through to ultrasound. This entails (figure 2) combining the concept of image formation as the collection and reassembly of diffracted light (Fourier Optics) with methods to define the impact of the lens itself on the fidelity of this process.
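This collection-and-reassembly picture can be made concrete with a short numerical sketch (Python/NumPy, synthetic object and illustrative pupil radii only): the pupil of the lens acts as a low-pass filter in the transform plane, and shrinking it below the object's grating frequency destroys the image detail.

```python
import numpy as np

# Numerical sketch of 4f imaging: lens aperture = low-pass filter in the
# transform plane.  Synthetic object; arbitrary units throughout.
n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)

# Object: grating of sharp vertical bars, period 16 pixels
obj = ((X // 8) % 2).astype(float)

# Field in the transform plane (DC shifted to the centre)
F = np.fft.fftshift(np.fft.fft2(obj))

def image_through_pupil(radius):
    """Re-image the object through a circular pupil of the given radius."""
    pupil = (X**2 + Y**2) <= radius**2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * pupil)))

sharp = image_through_pupil(60.0)    # pupil passes the grating orders
blurred = image_through_pupil(4.0)   # pupil passes only the zero order

contrast_sharp = sharp.max() - sharp.min()        # bars still resolved
contrast_blurred = blurred.max() - blurred.min()  # featureless grey
```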

Figure 2:

Elements of the imaging process in a so-called ‘4f’ imaging system. The object diffracts input light, red more than blue, and this is collected by the input lens. The diffracted light contains information about the structure of the image. The ‘transform plane’ focuses each diffracted component and in principle these components, representing specific image components, can be individually manipulated to perform image enhancement.


This in turn involves a simple examination of aberrations and the associated definitions of point spread functions. The objective here is to appreciate the next phase of imaging, namely the factors which influence resolution and distortion in the final perceived representation of the original object. Figure 3 presents a simplistic representation of the principal concepts, including the effects of numerical aperture and illumination wavelength, the impact of the various lens aberrations and the approach to integrating these tools into an image analysis technique.
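The headline dependence on numerical aperture and wavelength can be captured with the familiar Rayleigh form of the resolution limit (illustrative values below; the 0.61 prefactor applies to the classical two-point criterion):

```python
# Rayleigh two-point resolution: d = 0.61 * wavelength / NA.
# Illustrative values for a high-NA oil-immersion objective.

def rayleigh_resolution(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation, in the same units as the wavelength."""
    return 0.61 * wavelength_nm / numerical_aperture

d_green = rayleigh_resolution(550, 1.4)   # ~240 nm with green light
d_blue = rayleigh_resolution(450, 1.4)    # shorter wavelength, finer detail
```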

Figure 3(a)

Spatial filtering through the effect of the numerical aperture of the imaging lens.


Figure 3(b)

Lens aberrations and their impact on the image of a point object. The sketches are idealised to illustrate the generic features.


Finally we turn to illumination, and in particular the basic need for the illumination source and the required final image to be, in effect, placed an infinite distance apart. Other illumination options also need to be investigated, including the implications of coherent illumination (as used to demonstrate spatial filtering effects but rarely useful in actual imaging applications outside confocal microscopy, which is itself a form of spatial filtering). Other possibilities include changing the spectral distribution of the illumination (rarely used in biology, though present in some medical imaging) and the use, for example, of side illumination to highlight scattering phenomena. Finally, polarised light could also have a role to play and, whilst linearly polarised systems are by far the most common, there are additional possibilities in the use, for example, of circular polarisation in the examination of some types of biological samples.

Spatial filtering is in principle a simple concept (figure 4) in that the use of, typically, a stop in the filtering plane can highlight specific features in the image plane. Spatial filtering is an extremely versatile tool which can be used to highlight abrupt edges, pick out particular periodicities in an image structure or extract specific complex features through effectively realising a matched filter in the spatial frequency plane. However, it is perhaps worth mentioning that, unlike many digitally implemented techniques, accurate spatial filtering requires the use of single wavelength illumination, the most notable exceptions being the three principal spatial filtering tools used in biological imaging, known as dark field, phase contrast and confocal microscopy.

Figure 4:

Illustrating the principles of spatial filtering in a 4f system (figure 2) using a checkerboard input object and monochromatic light to highlight the diffraction pattern (Fourier transform) in the transform plane. The simple spatial filter selects one horizontal row in the transform pattern and results in the vertical bar image.
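The row-selection demonstration can be reproduced numerically. The sketch below uses a crossed-grid (mesh) object, as in the classic Abbe–Porter experiment, rather than a checkerboard, since the mesh puts its diffraction orders on the transform axes; keeping only the horizontal row through the transform centre leaves an image of vertical bars:

```python
import numpy as np

# Abbe-Porter style spatial filtering sketch: select one horizontal row
# of the transform of a mesh object and the image collapses to vertical bars.
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)

mesh = (((X // 8) % 2) * ((Y // 8) % 2)).astype(float)  # crossed grid, period 16

F = np.fft.fftshift(np.fft.fft2(mesh))

# Spatial filter: a slit passing only the horizontal row through the centre
mask = np.zeros_like(F)
mask[n // 2, :] = 1.0
filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(F * mask)))

row_variation = filtered[:, 10].std()   # along y at fixed x: uniform
col_variation = filtered[20, :].std()   # along x at fixed y: bars survive
```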


The principle underlying the first two of these three image processing tools is to minimise the impact of the overall background on low contrast images, in effect taking away the average illumination, which can often be much, much larger than the variations which contain the image detail. In the case of the confocal imaging system this background is light returned to the detector from outside the focal plane. In all cases the resulting image, whether viewed directly by eye or captured electronically, invariably demonstrates significantly higher contrast, and in the end it is contrast that forms the basis, or limit, of any imaging system. Dark field illumination is designed to operate with more conventional intensity contrast images, as for example shown in figure 5.

Figure 5:

Dark field imaging of a low contrast object. (a) the basic concept – a hollow illumination cone from the lamp via the condenser lens misses the objective unless diffracted by image structure. Consequently directly transmitted light through the image is eliminated. (b) an example image of the scatter from silver nanoparticles. Directly illuminating this sample would produce an intense white background against which the particles would be almost invisible.


Phase contrast, however, is designed to highlight features of varying optical thickness through which the transmitted (or indeed reflected) intensity is essentially unchanged. Here the operation is more subtle and typically involves a reduction (as opposed to the complete attenuation of dark field) of the background illumination passing through the zero order in the Fourier transform plane, coupled to a carefully designed phase delay. The phase delay obviously has to be matched to a particular illumination wavelength. Consequently phase contrast imaging inherently relies upon relatively narrow band operation for its most successful realisation, but useful white light images can also be produced. However these exhibit ‘rainbow’ effects which need to be interpreted with care. We have omitted confocal imaging in the present series of experiments. However our overall approach to understanding the basic processes involved in imaging should enable more advanced students to relate to both the need for and the principles of confocal systems.
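The Zernike scheme can be sketched numerically: a pure phase object is invisible in ordinary intensity, but attenuating the zero order and shifting it by a quarter wave (here a 70% amplitude attenuation, an illustrative figure) renders the phase structure visible:

```python
import numpy as np

# Zernike phase contrast sketch: manipulate only the zero order of a
# pure phase object.  Synthetic "cell" with 0.2 rad of extra phase.
n = 256
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)

phase = 0.2 * ((X**2 + Y**2) < 30**2)      # weak phase disc
field = np.exp(1j * phase)                 # unit intensity everywhere

bright = np.abs(field)**2                  # ordinary image: no contrast at all

F = np.fft.fftshift(np.fft.fft2(field))
F[n // 2, n // 2] *= 0.3 * np.exp(1j * np.pi / 2)  # attenuate + quarter-wave shift
phase_contrast = np.abs(np.fft.ifft2(np.fft.ifftshift(F)))**2

contrast_bright = bright.max() - bright.min()              # ~0
contrast_pc = phase_contrast.max() - phase_contrast.min()  # disc now visible
```

Setting the same zero-order coefficient to zero instead gives a crude numerical analogue of dark field imaging of the same object.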


The aim of this experimental investigation is to introduce familiarity with the basic features of biological microscopes and relate these, where appropriate, to the generic concepts developed earlier. Emphasis is on the core principles, but more specific advanced topics, likely to be of importance to the more physical-science-minded students, are included as optional areas of interest.

The vast majority of biological samples are illuminated in transmission, where the final image comprises the modulation imposed on the detected intensity (and by implication also colour) of light transmitted through the sample. Consequently the illumination system plays a critical part in defining the final image quality, and thus the first major aspect to be investigated is Koehler illumination (figure 6), a crucial feature very often overlooked. This includes the roles of the various aperture planes in the illumination system and the techniques whereby the illumination source is effectively placed at infinity with respect to the object and therefore the final image. Consequently the student gains an important appreciation of the nomenclature used in practical microscope systems, and of the effect of changes in the optical transmission characteristics of each of these planes. Another essential aspect is to recognise the routes through which the light source appears in the Fourier transform plane, linking back to the learning outcomes from the previous module. The consequence is to identify approaches through which Fourier transform filters can be realised within the source geometry in addition to directly within the more obvious (figure 2) Fourier planes.

Figure 6:


Illustrating the principles of the Koehler illumination system for transmission microscopy. Note that the object planes (and consequently the image planes thereof) and the lamp filament – or equivalent – planes are separated by infinity, and consequently the eye or camera sees only the image of the original object.

Producing the final image essentially follows the processes outlined previously, but thereafter the resulting images are now invariably digitally acquired and stored. The artefacts of the acquisition and storage process, and the interpretation of these artefacts, are arguably among the least appreciated microscope phenomena. As an example, images sampled on a regular point by point lattice (i.e. the pixels on a digital camera) can, under unfavourable circumstances, produce significant sampling effects somewhat akin to Moiré fringes. Such effects are considerably more likely when narrow wavelength sources are used.
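The sampling artefact is easy to demonstrate in one dimension: a grating finer than half the pixel pitch (the Nyquist limit) is recorded as a spurious coarse grating (frequencies below are illustrative, in cycles per pixel):

```python
import numpy as np

# Aliasing sketch: a 0.9 cycle/pixel grating sampled on an integer pixel
# lattice is indistinguishable from a 0.1 cycle/pixel grating.
n = 200
t = np.arange(n)                         # pixel indices

sampled = np.cos(2 * np.pi * 0.9 * t)    # true grating, above Nyquist (0.5)
alias = np.cos(2 * np.pi * 0.1 * t)      # the Moire-like pattern actually seen

max_difference = np.abs(sampled - alias).max()   # essentially zero

# The spectrum of the record peaks at the alias frequency: 20 cycles/record
spectrum = np.abs(np.fft.rfft(sampled))
peak_bin = int(np.argmax(spectrum))
```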

We also examine the effect of different exposure times on the quality of the recorded image. The benefits of collecting one long exposure compared to averaging several shorter exposures are explored when using a digital camera. This area also explores the use of histograms to ensure that the best configuration for the storage of the data is chosen. Using these simple software tools ensures that the full dynamic range of the camera is used. We also examine the use of deliberately saturated images to bring out specific features, though the dangers of this are also highlighted. At this stage the student has now learnt how to set up the optical system, what the effect of incorrect adjustment is, and how to capture the best possible image.
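The exposure comparison can be previewed with synthetic data: for a noise source independent of exposure (e.g. read noise), averaging N frames reduces the noise by roughly 1/√N (signal and noise levels below are illustrative):

```python
import numpy as np

# Frame-averaging sketch: mean of 64 noisy frames vs a single frame.
rng = np.random.default_rng(0)

true_signal = 100.0          # arbitrary units
sigma = 10.0                 # per-frame noise (illustrative)
n_frames, n_pixels = 64, 10000

frames = true_signal + sigma * rng.normal(size=(n_frames, n_pixels))

noise_single = frames[0].std()               # ~ sigma
noise_averaged = frames.mean(axis=0).std()   # ~ sigma / sqrt(64)
```

For shot-noise-limited imaging the bookkeeping differs, since the noise scales with the square root of the collected signal; this is exactly the kind of distinction the exercise draws out.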

However, once acquired, the image must be stored, and the normal temptation (and frequently the default option) is to use conventional JPEG compression. However, like all lossy compression systems, the JPEG process inevitably removes some data, and hence useful information, from the original image. Whilst for normal visual perception purposes this may be perfectly satisfactory, there are numerous examples in biology where this can throw away critical information. Consequently storing the raw data, whilst memory-hungry, is much preferable, and this loss of information is investigated in the practical work. This area also highlights the difference between pure data and information. In the long term it is information that is required from microscopy, and thus “throwing away” information early in the process is “criminal”. Indeed, as biological microscopic imaging moves rapidly towards quantified imaging, as opposed to just producing a good image, this loss of information is becoming more crucial and will become increasingly so, particularly as temporal information and particle tracking play an increasing role in biological understanding.

The image processing tools now available in standard software packages are immensely powerful and can be highly creative. This creativity is, though, a hazard, since it gives the opportunity to create spurious artefacts, and the experimental investigation which we have adopted pursues this through examples. Figure 7 illustrates some of these points. Here we have taken a standard image and passed it through software filters supplied in ImageJ, a freely available and much used Java based package originally developed by the NIH. The specific filters used are described, and it is clear what “new structures” can appear through injudicious use of computer manipulation. And this is before one even considers the use of Paintshop and the like!

Figure 7:

This is an extreme case to highlight the potential misunderstandings which can arise through injudicious use of image enhancement. The image on the left shows germinating pollen grains (indoor geranium), typically 20 µm in diameter, on the plant stigma running down the style. The right hand enhanced image could be interpreted as showing very significantly enhanced germination activity.


The Open Microscopy Environment (OME) is gaining acceptance in many biology laboratories and addresses the crucial issue of digitally misinterpreting image files. Here, using a server environment (which can be downloaded for free), raw image files are stored along with the associated meta-data (e.g. objective lens used, camera settings etc). When academic papers are submitted, the processed image may be supplied to the journal but a link is provided to the raw data, enabling either the reviewer, or later the reader, to view the real and original image.


There are three principal features of the basic microscope which are commonly used to enhance its performance. In essence all attempt to increase the contrast in the image between the features to be imaged and the unwanted background and noise.

The first involves modifying the illumination system, essentially through changing the source itself. Many biological samples have optical properties which depend on polarisation and/or exhibit birefringence. Combining a polarised light source with image acquisition through a polarisation analyser can often provide significant image enhancement through relatively simple modifications to the basic structure. However, interpreting the enhanced image presents significant challenges. Through a series of simple experimental procedures the student is shown how, using some of the basic principles learnt in the earlier modules, the observed images can be interpreted.
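One interpretive aid here is the standard expression for a birefringent element between crossed polarisers, I = I0 sin²(2θ) sin²(δ/2) with retardance δ = 2πΔn·d/λ; the sketch below uses illustrative sample values:

```python
import numpy as np

# Crossed-polariser transmission of a birefringent sample:
#   I = I0 * sin^2(2*theta) * sin^2(delta/2),  delta = 2*pi*dn*d/lambda
# theta = angle between the sample's fast axis and the polariser.

def crossed_polariser_intensity(i0, theta_rad, dn, thickness_nm, wavelength_nm):
    delta = 2 * np.pi * dn * thickness_nm / wavelength_nm   # retardance
    return i0 * np.sin(2 * theta_rad)**2 * np.sin(delta / 2)**2

# An isotropic region (dn = 0) stays dark between crossed polarisers,
# while a birefringent fibre (illustrative dn = 0.05, 3 um thick) lights up.
dark = crossed_polariser_intensity(1.0, np.pi / 4, 0.0, 3000, 550)
bright = crossed_polariser_intensity(1.0, np.pi / 4, 0.05, 3000, 550)
```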

Structured light, typically in the form of an illuminating grid, is a frequently used tool in non-contact profile measurement systems, and it can be suitably adapted for microscopic samples. This again links back to the Fourier concepts introduced earlier. As an example, if an intensity grid pattern (ideally consisting of a single sine wave) is projected onto the sample, a single image can be recorded. If the grid is then moved by a third of the spatial period a second image can be recorded, and similarly for a third image after a further shift. If these images are simply added directly, one obtains a conventional wide field image.

By considering the optical planes at the focus of the objective and at the camera, the image can be examined in “Fourier space”. The focal plane is the only one in which the frequency of the illumination (i.e. the grid frequency) is known, and thus by selecting out this frequency, the blurred contributions from light generated outside the focal plane are removed. The grid, with its known spatial frequency, can then be removed from the obtained images by in turn erasing its frequency from the spectrum.

A number of mathematical solutions can be used to select out the grid frequency, but the computation needs to be as simple as possible. Thus a sectioned image, not containing the zero order frequency, may be derived by calculating

Isec = √[(I1 − I2)² + (I2 − I3)² + (I3 − I1)²]

where I1, I2 and I3 are the three images recorded at grid phases separated by one third of a period.


The effect of this can be seen in figure 8.


Figure 8:

The use of structured light – effectively spatial frequency filtering in a recorded image – to produce sections through an image, again of pollen grains.
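A minimal numerical sketch of the three-phase sectioning computation, using synthetic one-dimensional data (arbitrary units) and the standard root-of-squared-differences combination of the three phase-stepped images:

```python
import numpy as np

# Structured-illumination sectioning sketch: only in-focus light is
# modulated by the grid; out-of-focus background is not.
x = np.arange(300)
in_focus = 1.0 + 0.5 * np.cos(2 * np.pi * x / 150.0)  # in-focus detail
background = 2.0                                      # out-of-focus blur
grid_period = 30.0

images = []
for k in range(3):
    shift = 2 * np.pi * k / 3                         # grid stepped by 1/3 period
    grid = 0.5 * (1 + np.cos(2 * np.pi * x / grid_period + shift))
    images.append(in_focus * grid + background)
i1, i2, i3 = images

# Adding the images reproduces the wide field image (background included)
widefield = (i1 + i2 + i3) / 3

# Root of squared differences rejects the unmodulated background entirely
sectioned = np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)
```

In this sketch the sectioned record comes out exactly proportional to the in-focus detail, with both the background and the grid removed.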


Alongside these technological advances there has been a revolution in the way that much of biology is now undertaken and this was recognised with the award of the 2008 Nobel Prize. Through genetic manipulation techniques it is possible to ensure that specific components of a cell, or organ, produce a fluorescent protein meaning that when the sample is illuminated with the correct wavelength of light only these areas of the cell will fluoresce. Thus it is now possible to examine a specific process within cells with a high degree of selectivity without the addition of extra chemical compounds and this is crucial for the biological interface of Biophotonics (Figure 9).

Figure 9

– examples of images using the green fluorescent protein labelling technique.


The basic principles of these markers, using standard samples rather than involving students in sample preparation, also feature in the experimental investigation. The basic concepts of fluorescence imaging are introduced, such as the selection of the correct excitation wavelength (LEDs), the operation of dichroic filters and the removal of the illumination wavelength from the final image. However, along with the advantages of fluorescence based imaging methods, the complications are also demonstrated, including the problem of photo-bleaching and the fact that in punctate labelled samples only the labelled features show, which can lead to a misrepresentation of the sample structure.

The final area explored is that of fluorescence lifetime imaging. This has rapidly established itself as a powerful tool in all areas of biophotonics, from clinical tissue diagnosis through to the measurement of protein–protein interactions. We present this important technique through the concept of pulsed illumination and/or variable frequency sinusoidal illumination applied as a stroboscope to slow down cyclical processes. The more usual implementation relies on high speed detectors and complex electronics. However, by using a slow detector (a conventional CCD camera) and a pulsed LED of variable frequency, it is possible to determine fluorescence lifetimes and, crucially, changes in fluorescence lifetimes. Thus the student also learns to apply simple concepts to highly complex problems.
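The frequency-domain variant of this stroboscopic idea reduces to a simple relation for a single-exponential fluorophore: sinusoidal excitation at frequency f returns fluorescence lagging by a phase φ = arctan(2πfτ), so the lifetime follows directly from a phase measurement (modulation frequency and lifetime below are illustrative):

```python
import numpy as np

# Frequency-domain lifetime sketch for a single-exponential fluorophore.
f_mod = 40e6        # 40 MHz modulation frequency (illustrative)
tau_true = 2.5e-9   # 2.5 ns lifetime (illustrative)

# Phase lag of the emitted fluorescence relative to the excitation
phi = np.arctan(2 * np.pi * f_mod * tau_true)

# Inverting the phase measurement recovers the lifetime
tau_estimated = np.tan(phi) / (2 * np.pi * f_mod)

# The modulation depth of the emission also encodes the lifetime
mod_depth = 1.0 / np.sqrt(1.0 + (2 * np.pi * f_mod * tau_true)**2)
```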

Finally, there is an expanding range of innovative software tools available for manipulating microscope images. Using basic concepts, the principles behind particle tracking are illustrated, showing that for regular spherical particles this is a fairly straightforward process, but that objects which change shape as they move require a significantly more complex approach. The aim here is not to explain or teach complex software routines but to ensure that the student always takes a questioning approach to image analysis and quantification. With the rapid growth of quantified imaging this, along with the complications of image compression discussed earlier, is an important point for students in all branches of science using imaging. Methods of Fourier image analysis are also examined, linking what is possible on a computer with the earliest work undertaken on real optical systems. Again the emphasis is on awareness of the possible complications in the context of the benefits of such complex manipulation.
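The regular-particle case can be illustrated with an intensity-weighted centroid tracker applied to a synthetic drifting spot (all values illustrative; real trackers must also cope with noise, overlap and shape change):

```python
import numpy as np

# Centroid particle-tracking sketch: locate a bright spot in each frame
# by its intensity-weighted centre of mass.

def centroid(frame):
    """(y, x) centre of mass of a non-negative intensity image."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (ys * frame).sum() / total, (xs * frame).sum() / total

n = 64
ys, xs = np.indices((n, n))

# Synthetic Gaussian spot drifting 3 pixels per frame in x
positions = []
for frame_index in range(5):
    cx = 10.0 + 3.0 * frame_index
    spot = np.exp(-((xs - cx)**2 + (ys - 32.0)**2) / (2.0 * 2.0**2))
    positions.append(centroid(spot))

step_x = positions[1][1] - positions[0][1]   # recovered drift per frame
```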


The series of three experimental investigations serves to underpin the basic understanding of the functions and operation of biological imaging systems in general and microscopes in particular. Microscopy is arguably the most basic enabler for all biophotonic systems and as such will feature strongly in the emerging portfolio of undergraduate and postgraduate courses in the subject. Of course there are other important and versatile tools through which biological imaging can be yet further enhanced. In particular these include spectroscopy and spectroscopic analysis, hyper-spectral imaging and optical techniques for biosensing. We plan to add basic investigations into these subjects within the foreseeable future.


Some general texts on biophotonics and microscopy include:


Paras N. Prasad, “Introduction to Biophotonics”, Wiley-Interscience, 2003, 593 pp.


Joseph W. Goodman, “Introduction to Fourier Optics”, Third Edition, McGraw-Hill, 2005, 491 pp.


The green fluorescent protein, initially derived from jellyfish, earned its discoverers Osamu Shimomura, Martin Chalfie and Roger Y. Tsien the 2008 Nobel Prize in Chemistry. There are many accounts of this, typified by the Nobel Prize web site.


There are many more detailed introductions to microscopy, for example: Douglas B. Murphy, “Fundamentals of Light Microscopy and Electronic Imaging”, John Wiley, 2001.

© 2009 Society of Photo-Optical Instrumentation Engineers (SPIE).
John Girkin, Brian Culshaw, Iain Mauchline, and Doug Walsh "Introducing biophotonics to undergraduates across the disciplines", Proc. SPIE 9666, 11th Education and Training in Optics and Photonics Conference, 96660P (5 June 2009);

