Although image formation is one of the most common topics in any optics course, the different ways of producing an image are not always easy to understand. In didactic practice, for instance, the discussion of image formation is often limited to ray tracing for mirrors and lenses, and students are induced to believe that two or three light rays contain all the information about the object. In particular, no mention is made of the fact that objects can be considered as collections of pointlike sources.1
True light propagation is almost never considered: optical systems are sometimes described as if only light sources (or illuminated objects) and screens existed, with no attention to what happens to the light between the source and the screen.2 In reality, light from objects diffracts, and the information it carries changes its appearance during propagation.
The role of imaging systems, even of the simplest ones such as pin-hole cameras, is to recover the information contained in the diffracted light and produce “images” of the object; the resulting images can be very different in nature from one another.
Our didactic path is focused on image production and follows a route of increasing complexity from pin-hole cameras to holography. The idea is to link the characteristics of the image to the selection operated on propagating light by optical systems.
To clarify what we mean by following the path of light, let us first discuss a simple experiment we use to introduce the problem of vision to students. Consider the object in Fig. 1(a): a small plastic box containing water and transparent gel deco beads, whose refractive index is very close to that of water. The experiment consists in showing the container to students while preventing light from passing through it from the back (we cover the back wall with our hands or with a black cardboard): in this way the beads immersed in the water become invisible. On the contrary, when light is free to pass through the container, the beads can be seen. This behavior can be explained by considering the path the light travels to reach the eye of the observer: in the first case the light enters the container from the front and is reflected back, while in the second case the light is mainly refracted inside the container. As reflection and refraction have different sensitivities to small changes in the refractive index, we can distinguish the beads from the water better by refraction than by reflection. The same applies to Pyrex glass beakers filled with glycerol (see Fig. 1(b)): as their refractive indices are very similar, the smaller beaker disappears.
The explanation of this simple experiment requires the theory of reflection and refraction but, most of all, it requires the awareness that when we say we see something, some light must enter our eyes, and that the information carried by that light depends on the path it has followed.
The idea behind the didactic path is thus to follow light in its propagation from the object to the final image. The light is described as a superposition of a great many plane waves propagating in all directions. Plane waves are the wave analog of light rays, as they have a single propagation direction, and the idea that light can be represented by a suitable combination of such waves is at the basis of Fourier analysis. Of course we do not mean to present the full Fourier theory to High School students, but the principles can be introduced in a very simple way by using trigonometric functions.
Describing light as a superposition of waves allows us to identify the different kinds of information that can be encoded in light: wavelength (color), amplitude (intensity), wavevector (propagation direction, i.e., the light ray), and phase.
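The idea that a sharp-edged distribution can be built from simple trigonometric functions can be shown numerically. A minimal sketch (function names and parameters are ours, purely illustrative): partial sums of sine waves approximating a square "slit" profile.

```python
import numpy as np

# Build an approximation of a square "slit" profile from sine waves
# (partial sums of its Fourier series), illustrating the idea that a
# complicated light distribution can be written as a sum of simple waves.
x = np.linspace(0, 2 * np.pi, 1000)

def square_wave_partial_sum(x, n_terms):
    """Sum the first n_terms odd harmonics of a square wave."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):  # odd harmonics 1, 3, 5, ...
        s += (4 / (np.pi * k)) * np.sin(k * x)
    return s

few = square_wave_partial_sum(x, 3)     # coarse approximation
many = square_wave_partial_sum(x, 200)  # sharp edges emerge

# With more harmonics the sum approaches the ideal square wave
# (the peak stays slightly above 1 because of the Gibbs overshoot).
print(np.max(np.abs(many)))
```

Plotting `few` and `many` against `np.sign(np.sin(x))` makes the convergence visible at a glance, which is the level of Fourier analysis we aim at with students.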
We analyze different imaging systems to see what part of the information each of them selects and hence what kind of image it produces.
Pin-hole cameras are the simplest way to produce images of illuminated objects (see Fig. 2). The principle, well known since the Middle Ages, is that the small hole selects only one direction for the light coming from each point of the object, so the image is built by a perfect correspondence between the points of the object and the points of the screen. Among the characteristics of the light coming from the object, pin-hole cameras only select color, intensity and a single direction per point. The result is a 2D image that is colored, faint (only a small amount of light can pass through the small hole), and sharper and sharper as the pin-hole size decreases, with no on-focus plane: the image simply scales with distance. The image is flat, as if it came from a flat object, and, in some sense, there is no localization of the information about the object.
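The scaling of the pin-hole image with distance follows from similar triangles through the hole. A minimal sketch (function name and numbers are ours, for illustration only):

```python
def pinhole_image_size(object_size, object_distance, camera_depth):
    """Similar triangles through the hole: the image scales with the
    ratio of the hole-to-screen depth to the object distance."""
    return object_size * camera_depth / object_distance

# Assumed values: a 1.0 m object at 5.0 m, camera 0.2 m deep.
size = pinhole_image_size(1.0, 5.0, 0.2)
print(size)  # 0.04 m: moving the object away simply shrinks the image
```

There is no focal condition in this formula: any object distance gives a (scaled) image, which is why the pin-hole camera has no on-focus plane.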
Lenses, together with mirrors, are the most commonly used devices to produce images, starting from the eye, which includes a very sophisticated lens system. For this reason it is very important that the working principle of lenses is understood correctly.3
The use of lenses improves the brightness of the image, as more of the light coming from each point of the object is collected, together with its color and intensity. Lenses produce 2D images that are colored, bright, and come into focus on a well-defined plane. As with pin-hole cameras, there is a one-to-one correspondence between the points of the object and the points of the image in the image plane: the information is localized.
Holography is an imaging technique that does not require lenses, as it is based on recording on a photographic plate the interference pattern produced by the light field diffracted by the object and a reference field.4 To record the interference, holography requires laser light, which is monochromatic: a portion of the laser beam is enlarged and used to illuminate the object, and the rest is used as the reference field. Holography performs no selection of the field coming from the object (except for that imposed by the size of the holographic plate) and thus records the entire information contained in the light. Of course, as laser light is monochromatic, the color of the object is not recorded. The holographic plate must undergo photographic development; it can then be re-illuminated with a laser to reproduce a field equal to that originally diffracted by the object. The resulting image is 3D, in the sense that we can look at it as if it were the original object: it conserves the complete parallax.
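The essence of recording and reconstruction can be reproduced in a one-dimensional numerical toy model. This is a sketch under assumed, purely illustrative parameters (unit-amplitude tilted plane reference wave, scalar complex fields on the plate), not the actual experimental setup:

```python
import numpy as np

# 1D toy model of hologram recording and reconstruction.
N = 2048
x = np.linspace(-1, 1, N)

# Object field at the plate: an assumed complex amplitude
# diffracted by the object (Gaussian envelope, tilted phase).
obj = np.exp(-x**2 / 0.05) * np.exp(1j * 20 * x)

# Reference field: a unit-amplitude tilted plane wave.
ref = np.exp(1j * 200 * x)

# The plate records only the intensity of the superposition...
intensity = np.abs(obj + ref) ** 2

# ...yet re-illuminating the developed plate with the reference wave
# gives  I * ref = (|obj|^2 + 1) * ref + obj + conj(obj) * ref^2,
# i.e. a term equal to the original object field.
reconstructed = intensity * ref

# Project the reconstruction onto the object field: the object term
# is present with (nearly) full weight.
overlap = np.vdot(obj, reconstructed) / np.vdot(obj, obj)
print(abs(overlap))  # close to 1
```

The other terms oscillate rapidly at different spatial frequencies, which is why in a real hologram they propagate in different directions and do not spoil the reconstructed image.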
We note that in a hologram the information about the points of the object is not localized but is spread all over the holographic plate, so that when a hologram is broken into pieces, each fragment contains the information of the entire image, though seen from a different point of view (see Fig. 3).
Once we have understood that light carries information about illuminated objects and that a different selection of light characteristics can lead to very different images, we show that we can manipulate the information contained in the light so that the final image appears dramatically different from the original one. Once again we act on the light during its propagation, and we must be aware of the changes that happen along the way. In the following sections we discuss two possible experiments that demonstrate image manipulation.
Limiting the aperture of a lens
We consider a simple setup in which a single lens forms the image of an illuminated transparency.5 The image is observed on a screen placed according to the lens equation. We ask students what happens if we partially cover the lens aperture with a cardboard, so as to prevent more than half of the light from reaching the screen (see Fig. 4). We know that the large majority of students answer that only half of the image will be visible on the screen. What actually happens is that the whole image can still be seen, only fainter (see Fig. 5(a), (b) and (c)).
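The geometry of such a setup can be dimensioned with the thin-lens equation. A minimal sketch with assumed numbers (a 100 mm lens and a transparency 300 mm away; these are not the actual experimental values):

```python
def image_distance(f, p):
    """Thin-lens equation 1/f = 1/p + 1/q, solved for the image distance q."""
    return 1.0 / (1.0 / f - 1.0 / p)

# Assumed values: f = 100 mm lens, transparency 300 mm from the lens.
f, p = 100.0, 300.0
q = image_distance(f, p)
m = -q / p  # transverse magnification (negative: inverted image)
print(q, m)  # 150.0 -0.5
```

Placing the screen at `q` gives the sharp, inverted, half-size image on which the aperture-covering experiment is then performed.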
It is interesting to show that, by shifting the cardboard closer to the screen, a part of the image starts to disappear (see Fig. 5(d)) and disappears completely when the cardboard is very close to the screen. Once again this can be understood by considering that at the plane of the lens the information is spread over the entire lens plane, so covering a part of it does not prevent the formation of the image. On the other hand, the effect of the lens is to restore the one-to-one correspondence between the points where light is detected and the points of the object (the correspondence is perfect at the image plane), so that, approaching the screen, blocking a part of the light actually eliminates a part of the image.
Spatial filtering in the Fourier plane
On the way between the lens and the screen in a system like that of Fig. 4 there is a special position where we can operate a selection on the propagating light: the focal plane of the lens. In fact, it can be demonstrated6 that the light distribution in this plane is the spatial Fourier transform of the light distribution at the object plane, i.e., the lens has the same effect as free propagation over very long distances. By putting a stop in the focal plane we can thus select some Fourier components, that is, some of the plane waves that compose the light diffracted by the object. In Fig. 6(a) we show the magnified image of a 1 mm2 mesh made of ∼100 μm holes obtained with a single-lens imaging system. In Fig. 6(b) and (c) we show the image as modified by a slit (600 μm) placed in the focal plane of the lens, which selects only the vertical (b) or horizontal (c) components of the Fourier spectrum: as the horizontal structures in the diffraction pattern depend on the vertical structures in the object, the selection results in the elimination of vertical or horizontal lines from the mesh. This procedure is called “spatial filtering” and is commonly used to eliminate noisy components from laser beams.
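The filtering of the mesh can be reproduced numerically with a two-dimensional Fourier transform. A sketch with illustrative sizes (not the actual experimental parameters):

```python
import numpy as np

# A square mesh: dark horizontal and vertical lines on a bright field.
N = 256
idx = np.arange(N)
mesh = np.ones((N, N))
mesh[:, idx % 32 < 4] = 0.0  # vertical dark lines (vary along x)
mesh[idx % 32 < 4, :] = 0.0  # horizontal dark lines (vary along y)

# Field in the focal plane: the 2D Fourier transform of the object.
F = np.fft.fftshift(np.fft.fft2(mesh))

# A horizontal slit keeps only components with (nearly) zero vertical
# spatial frequency: the horizontal lines of the mesh are filtered out.
slit = np.zeros_like(F)
c = N // 2
slit[c - 2 : c + 3, :] = F[c - 2 : c + 3, :]

filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(slit)))
# 'filtered' now shows only the vertical lines: it is (numerically)
# constant along the vertical direction.
```

Rotating the slit by 90 degrees (keeping columns around the center instead of rows) removes the vertical lines instead, reproducing the two cases of Fig. 6(b) and (c).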
The result of spatial filtering is rather surprising and interesting for students, because it demonstrates once more that the information is in the light, but the way it is encoded at different propagation distances is not trivial.
Note that, once the working principle of spatial filtering is understood, the effect of a pin-hole camera can be revisited in this spirit: when objects are far away, the pattern of the diffracted light on a screen is the Fourier transform of the light at the object plane. So if we collect the light through a small hole we are performing a spatial filtering that selects only a part of the available information: what we lose are the small details, carried by waves at high angles that are not collected by the hole.
DISCUSSION AND CONCLUSIONS
We proposed this didactic path to self-selected students participating in the activity “Photography and holography” (16 hours) within the Piano Nazionale Lauree Scientifiche (PLS) of the Italian Ministry of Education and, in part, to entire classes in the module “Vision and image formation” (2 hours) of the LuNa Project.7 During the academic years from 2009/2010 to 2012/2013 we involved more than 200 students in the activity, mainly from scientifically oriented High Schools. The result was a good level of understanding of the nature of images and of the differences among them.
The main difference between this approach and the conventional one is that the process of light propagation is treated in its full complexity, with the simplifications introduced by the optical systems rather than by the physical description.
In this way the description is at first glance more complicated, but in the end it explains many interesting phenomena better and more easily, avoiding confusion.
The didactic path can be implemented in any school laboratory, as it does not require expensive equipment: the only difficulty lies in hologram recording, which requires a more technical setup.
The Authors acknowledge the Piano Nazionale Lauree Scientifiche (PLS) of the Italian Ministry of Education.