The use of digital images to demonstrate basic concepts in physics began in the nineties and has grown in recent years thanks to the low cost of digital cameras and the development of software tools (both commercial and shareware) that ease image processing.1, 2 In teaching optics, the use of digital images has been reported previously3–5 and offers the possibility of analysing some phenomena quantitatively, which would be very difficult with the traditional equipment available in teaching labs. However, the success of the experience depends on the possibility of extracting numerical values from the images, which in turn depends largely on their quality and characteristics.
This communication presents a detailed analysis of some practical aspects of the image acquisition process and their influence on the success of different experiments. The experiments cover the usual topics in an undergraduate optics lab, i.e. both geometrical and physical optics, including spectral analysis of light. We have implemented them so that they can be studied in a more quantitative manner, exploiting the advantages of digitized images. Moreover, examples of quantitative data analysis are discussed, which demonstrate the high degree of accuracy that can be achieved with simple equipment in a teaching lab.
In experiments that require the measurement of lengths, it is necessary to include a ruler or an object of known length in the scene to convert between number of pixels and length in the digital image. There are several ways to perform this transformation. A straightforward one is to use a graphics program such as MS Paint.6 By placing the mouse cursor on the reference scale, it is simple to convert from pixel coordinates to real dimensions. There are also commercial programs such as Matlab® and VideoPoint®7, 8 and free tools such as Python, PixelProfile and others9–11 that can directly convert the pixel coordinates of the picture into real coordinates. Alternatively, it is possible to write a spreadsheet macro so that clicking on a digital image imported into the program returns the pixel coordinates in the spreadsheet. The choice of tool will depend on the type and depth of the analysis required and on the programming skills of the students.
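The pixel-to-length conversion mentioned above can be sketched in a few lines. This is a minimal illustration, assuming the image contains a reference ruler whose end points have been read off in pixel coordinates; all pixel values below are hypothetical examples, not measurements from the paper.

```python
# Minimal sketch of the pixel-to-length conversion, assuming a ruler
# of known length is visible in the image.  Pixel values are invented.

def pixels_to_mm(px, ref_px_start, ref_px_end, ref_length_mm):
    """Convert a pixel coordinate to mm, with the origin placed at the
    start of the reference scale."""
    scale = ref_length_mm / (ref_px_end - ref_px_start)  # mm per pixel
    return (px - ref_px_start) * scale

# A 100 mm ruler spanning pixels 120 to 920 gives 0.125 mm/pixel
print(pixels_to_mm(520, 120, 920, 100.0))  # -> 50.0
```

The same scale factor can then be applied to every coordinate extracted from the image.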
First of all, a number of aspects concerning the camera should be considered in order to improve image quality. The camera should allow the exposure, aperture and focus to be selected. In some cases the automatic mode can be used but, owing to the low illumination levels in optics experiments, the manual mode may be necessary. In physical optics experiments such as diffraction, the intensity pattern shows a highly non-uniform distribution. If the camera sets the exposure time automatically, the relative intensities of the peaks are correctly represented only for the lateral maxima, while the bright central peak is automatically reduced in intensity below the sensor saturation threshold.12 In contrast, an ad-hoc adjustment of the exposure time yields images in which many side peaks are correctly captured.
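A quick way to judge whether a chosen exposure time saturates the central peak is to count pixels at the sensor limit. The sketch below assumes an 8-bit grey-scale frame already loaded as a NumPy array; the toy frame is invented for illustration.

```python
import numpy as np

# Sketch: fraction of pixels at the 8-bit sensor limit, assuming a
# grey-scale image loaded as an array.  The 2x2 frame is a toy example.

def saturated_fraction(img, limit=255):
    """Return the fraction of pixels at or above the saturation limit."""
    img = np.asarray(img)
    return np.count_nonzero(img >= limit) / img.size

frame = np.array([[255, 255, 180], [90, 40, 10]])
print(saturated_fraction(frame))   # 2 of the 6 pixels are saturated
```

If the fraction is non-zero in the region of interest, the exposure time should be shortened (or the multi-exposure method described later applied).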
Other practical aspects that must be taken into account concern the lighting and the background. The ambient lighting should be dim enough not to disturb the light from the experiment, yet sufficient to handle the camera. Fig. 1a shows an image of light rays emerging from a lens, recorded with ambient light and smoke as the scattering medium. Fig. 1b depicts the same experiment performed in full darkness. As can be appreciated, ambient light reduces the visibility of the light scattered by the smoke, so working in darkness improves the image. Regarding the background, it is convenient to use a soft, uniform tone that contrasts with the object and with the colour of the light source; in physical optics experiments, however, a background is not necessary because the camera is focused on the pattern.
Experiments in geometrical optics are mainly concerned with the laws of reflection and refraction, the determination of the focal length of mirrors and lenses, and image formation in mirrors. These experiments involve tracing the paths of light rays. The materials used comprise acrylic lenses, large metallic mirrors and incandescent lamps with multiple slits to obtain collimated beams. In certain cases it is convenient to use coloured filters to distinguish the different rays, as can be seen in Fig. 2. To select the most adequate conditions for image acquisition of the ray paths, preliminary tests should be carried out, although in some cases the images can be improved with photo editing software. Fig. 3a and Fig. 3b show the same image before and after brightness adjustment.
Tracing light rays and their subsequent analysis can be done with software tools at different levels of automation. VideoPoint® is a user-friendly tool that allows the point coordinates of the light rays to be extracted; the intersection of the ray paths can then be calculated in a spreadsheet. Moreover, using Matlab® functions, not only can the coordinates be determined, but the calculations to trace the rays and to characterise the optical element can also be performed, as shown in Fig. 4a and Fig. 4b.
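The intersection calculation is simple enough to sketch: fit a straight line to the points extracted along each ray and solve for the crossing point. The sample coordinates below are invented for illustration, not taken from Fig. 4.

```python
import numpy as np

# Sketch of the ray-path intersection calculation: least-squares line
# through each ray's extracted points, then the analytic crossing point.
# The sample coordinates are invented for illustration.

def fit_line(xs, ys):
    """Least-squares fit of y = m*x + b; returns (m, b)."""
    m, b = np.polyfit(xs, ys, 1)
    return m, b

def intersection(line1, line2):
    """Crossing point of two lines given as (m, b) pairs."""
    (m1, b1), (m2, b2) = line1, line2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Two refracted rays converging towards the focal point of a lens
ray_a = fit_line([0, 10, 20], [30.0, 25.0, 20.0])
ray_b = fit_line([0, 10, 20], [10.0, 15.0, 20.0])
print(intersection(ray_a, ray_b))   # crossing point of the two rays
```

For a focal-length measurement, the intersection of several such ray pairs can be averaged.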
As mentioned above, one of the problems in the use of digital cameras is the saturation of the pixel signals. Some authors have alleviated this problem by using filters to reduce the intensity of the light reaching the CCD. However, in cases such as diffraction, where there are dramatic differences between the intense central spot and the secondary maxima, the use of filters can lead to a loss of information in the less intense areas and makes quantitative analysis difficult.
In a previous work, an alternative method was proposed.13 It is based on the pixel-by-pixel combination of a series of images taken with exposure times adequate to measure the different intensity levels in the diffraction pattern. Most digital cameras record images as JPEG files, which can be read by photo editing software working with 24 bits in RGB format or 8 bits in grey-scale. By combining the different images it is possible to extend the range of intensities beyond the 8 bits and thus obtain the intensity profile of the complete diffraction pattern. In the resulting image, each pixel takes its intensity from the picture with the highest exposure that does not saturate it.
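A minimal sketch of this combination, assuming the grey-scale frames are already loaded as arrays, could look as follows. For each pixel the value is taken from the longest exposure that did not saturate, normalised by its exposure time to give a relative intensity; the toy 2×2 frames and exposure times are invented for illustration.

```python
import numpy as np

# Pixel-by-pixel combination of exposures, as described above.
# Assumes 8-bit grey-scale frames loaded as arrays (toy data here).

SATURATION = 255  # 8-bit grey-scale limit

def combine_exposures(images, times):
    """images: arrays of equal shape, ordered from shortest to longest
    exposure; times: the matching exposure times in seconds."""
    result = np.asarray(images[0], dtype=float) / times[0]
    for img, t in zip(images[1:], times[1:]):
        img = np.asarray(img, dtype=float)
        ok = img < SATURATION            # still unsaturated at this exposure
        result[ok] = img[ok] / t         # longer exposure -> better signal
    return result

# Toy frames: the bright pixels saturate in the 0.1 s frame, so their
# values are kept from the 0.01 s frame instead.
short = np.array([[10, 200], [250, 40]])
long_ = np.array([[100, 255], [255, 255]])
print(combine_exposures([short, long_], [0.01, 0.1]))
```

The resulting array holds relative intensities on a common scale, which is what the fits in the next section require.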
Measurements for single slits, double slits and circular holes were recorded and the intensity profiles fitted to theoretical curves. Fig. 5 and Fig. 6 depict experimental data and the fit to Fraunhofer approximation curves for a double slit experiment. Fig. 7 shows a similar procedure for a circular hole.
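A fit of this kind can be sketched with `scipy.optimize.curve_fit` on the double-slit Fraunhofer expression. The wavelength, screen distance, slit width and separation below are illustrative values, not those of the experiment in Fig. 5 and Fig. 6, and the profile is synthetic rather than measured.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting an intensity profile to the Fraunhofer double-slit
# expression I = I0 * sinc^2(a*x/(lam*L)) * cos^2(pi*d*x/(lam*L)).
# All parameter values are illustrative assumptions.

lam, L = 632.8e-9, 1.0                    # He-Ne wavelength (m), screen distance (m)

def double_slit(x, I0, a_um, d_um):
    u = x / (lam * L)                     # sin(theta) ~ x/L for small angles
    return (I0 * np.sinc(a_um * 1e-6 * u) ** 2
            * np.cos(np.pi * d_um * 1e-6 * u) ** 2)

x = np.linspace(-0.02, 0.02, 400)             # position on the screen (m)
profile = double_slit(x, 1.0, 100.0, 400.0)   # synthetic, noiseless profile

popt, _ = curve_fit(double_slit, x, profile, p0=[0.9, 95.0, 395.0])
print(popt)   # recovered I0, slit width (um) and separation (um)
```

With real data, `x` comes from the pixel-to-length calibration and `profile` from the combined-exposure image.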
By following this procedure, students can quickly and routinely record diffraction patterns. Simple calibrations allow them to convert pixel numbers to position values with a high degree of accuracy. This technique can also be used for other photometric applications.
Digital cameras can also be used to analyse light spectra. The objective of this experiment is to measure the wavelength of some prominent spectral lines of both known and unknown elements by means of a grating spectrometer. In our case, the grating spectrometer was assembled from a single slit acting as a collimator to produce a parallel beam of light from the source, a grating to diffract the light, and a glass diffuser onto which the spectrum is projected. The camera is focused on the glass to record the spectrum. Different spectral lamps and discharge tubes can be used.
First of all, it is necessary to calibrate the spectrometer with known wavelengths to determine the relationship between wavelength and pixel number. Fig. 8e depicts the calibration curve obtained using the emission lines of a Hg lamp (435.8 nm, 546.1 nm and 578.2 nm) and a Na lamp (589.3 nm).
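The calibration itself reduces to a linear fit of wavelength against pixel number. In the sketch below the Hg and Na wavelengths are those quoted above, but the pixel positions are hypothetical; in practice they are read from the recorded spectra.

```python
import numpy as np

# Calibration sketch: linear fit of wavelength vs. pixel number using
# the Hg and Na lines quoted above.  Pixel positions are hypothetical.

pixels = np.array([210, 655, 785, 830])                 # assumed line positions
wavelengths = np.array([435.8, 546.1, 578.2, 589.3])    # nm (Hg, Hg, Hg, Na)

slope, intercept = np.polyfit(pixels, wavelengths, 1)

def pixel_to_nm(px):
    """Convert a pixel number to wavelength using the calibration."""
    return slope * px + intercept

print(round(pixel_to_nm(500), 1))   # wavelength at an arbitrary pixel
```

Once `pixel_to_nm` is in hand, every line position in an unknown spectrum maps directly to a wavelength.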
Once the spectrometer is calibrated, it is possible to identify the emission spectra of different elements. Fig. 9a shows the spectral lines of a hydrogen tube. The wavelength window ranges from 400.98 nm to 666.16 nm. The spectral lines observed correspond to transitions in the Balmer series, n = 3 → n = 2 and n = 4 → n = 2, as shown in Fig. 9b.
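The identification can be checked against the Rydberg formula, 1/λ = R(1/2² − 1/n²), a short computation students can run themselves:

```python
# Balmer-series check via the Rydberg formula 1/lambda = R*(1/4 - 1/n^2).

R = 1.0973731568e7   # Rydberg constant (m^-1)

def balmer_nm(n):
    """Wavelength in nm of the Balmer transition n -> 2."""
    inv_lambda = R * (1 / 2**2 - 1 / n**2)   # m^-1
    return 1e9 / inv_lambda

print(balmer_nm(3))   # H-alpha, about 656 nm
print(balmer_nm(4))   # H-beta, about 486 nm
```

Both values fall inside the wavelength window quoted above and match the two measured Balmer lines.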
The use of digital imaging and computational techniques opens a wide range of possibilities to improve optics teaching laboratories. To obtain high-quality teaching material, the images must meet certain requirements: a number of points involving lighting, scales and camera settings, among others, should be taken into account during image acquisition. The saturation of the CCD pixels, which mainly affects diffraction experiments, requires special attention.
In this work, we have presented different optical experiments that can be easily studied in a more quantitative manner with the aid of a digital camera. These experiments cover the usual topics at undergraduate level, ranging from elementary ray optics to more advanced subjects such as Fraunhofer diffraction and spectral analysis. In all cases the analysis can be performed with a high degree of accuracy owing to the use of digital images and adequate software tools.
Elliott, K. H. and Mayhew, C. A., "The use of commercial CCD cameras as linear detectors in the physics undergraduate teaching laboratory," Eur. J. Phys. 19(2), 107–117 (1998).
Gil, S., Reisin, H. D., and Rodriguez, E. E., "Using a digital camera as a measuring device," Am. J. Phys. 74(9), 768–775 (2006).
Wein, G. R., "A video technique for the quantitative analysis of the Poisson spot and other diffraction patterns," Am. J. Phys. 67(3), 236–240 (1999).
Deizarra, C. and Vallee, O., "On the use of linear CCD image sensors in optics experiments," Am. J. Phys. 62(4), 357–361 (1994).
Vannoni, M. and Molesini, G., "Speckle interferometry experiments with a digital photocamera," Am. J. Phys. 72(7), 906–909 (2004).
"Microsoft Paint overview," http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/mspaint_overview.mspx?mfr=true (accessed 15-Jun-2013).
"Matlab overview," http://www.mathworks.com/products/matlab/index.html (accessed 15-Jun-2013).
"PixelProfile web page," http://www.efg2.com/Lab/ImageProcessing/PixelProfile.htm (accessed 15-Jun-2013).
Rossi, M., Gratton, L. M., and Oss, S., "Bringing the digital camera to the physics lab," The Physics Teacher 51, 141–143 (2013).
Ramil, A., Lopez, A. J., and Vincitorio, F., "Improvements in the analysis of diffraction phenomena by means of digital images," Am. J. Phys. 75(11), 999–1002 (2007).