Learning wave optics requires strong abstraction ability and scientific reasoning to understand wave properties and to construct mental images and schemes. Students often do not recognize the critical role of reasoning, nor do they understand what constitutes an explanation in physics. This is why teaching by telling is an ineffective mode of instruction for most students. Moreover, experienced instructors know that there is a gap between what they teach and what is learned. Many students do not easily grasp theoretical physical models and fail to develop a coherent framework for important optical concepts, despite having finished their introductory physics studies. Connections among concepts, formal representations, and the real world are often lacking after traditional instruction.
For this reason, wave optics courses benefit from being supported by experiments during lab work, which offer the opportunity to observe optical effects directly. Students can thus observe phenomena such as interference and diffraction patterns, as well as polarization effects, with the naked eye.
Many previous works in physics education research [2, 3] have indicated that observations made during the learning process let students draw inferences, and therefore enhance understanding, only if the students can find relevant guidance directly in their learning environment. In optics lab work, students do not easily construct the causal relationship between wave properties and their manipulations, even when they can observe the consequences of the related phenomena. Students generally remain within a ray-optics conception, due to a lack of visual evidence of the wave nature of light. As a result, “misconceptions” are frequent: coherent frameworks of ideas that persist in students’ minds and are resistant to instruction [4, 5]. Manipulation errors also occur frequently during lab work. For example, the phase shift between the two interfering waves in a Michelson interferometer clearly remains a complex phenomenon to understand; consequently, the link between the interference patterns and the adjustments of the interferometer is often not clearly established by the students. Moreover, once a Michelson interferometer is misaligned, the teacher may need significant time to realign it correctly, which can slow down the session. Therefore, despite very convincing experiments illustrating most optical effects, it remains difficult for most students to acquire a deep understanding and good know-how in optics.
In this context, we believe there is a real need to develop a new generation of learning environments specifically designed to enhance training in the field of optics and photonics.
In this study, we propose a new concept based on a hybrid optical bench that benefits from both the physical and the virtual worlds. This hybrid environment is mainly aimed at university optics students. Such a hybrid setup offers not only the advantages of a real optical bench, including adjustment sensitivity, but also those of Augmented Reality (AR). AR is a technology that overlays computer-generated virtual elements onto the real-world environment in real time [6, 7]. In other words, in AR environments virtual and real objects co-exist and interact simultaneously, allowing users to see the physical world with virtual objects superimposed on it. Compared to fully virtual experiences, AR experiences are closer to real ones, and complex spatial relationships can be visualized easily [8-10]. In this way, we can provide the user with extra information about what is happening. Numerous previous works have already shown the benefit of this technology in education (e.g. [11-19]), and many teachers regard AR tools as excellent environments for providing feedback and for simulation.
Compared to a fully virtual simulator, where a replica of the optical bench is displayed on a computer screen, our hybrid approach retains the requirements of a real physical setup. It has been shown that interacting with digitally augmented physical spaces benefits investigation. In optics in particular, the tuning of the various physical elements (lenses, mirrors, and so on) is essential. Because our hybrid environment is close enough to a real optical bench, with similar mechanical adjustments (translation, rotation), manipulating the hybrid platform will be very close to what experimenters are used to doing in their daily activities.
As a first demonstration of the concept, we decided to implement the emblematic Michelson interferometer. We thus built what we call the AMI* for Augmented Michelson Interferometer (* in French, ami means friend).
Our motivation was to:
AMI has been designed to reproduce a Michelson interferometer experiment, with the difference that neither true optical elements (e.g. glass plates and mirrors) nor a true light source are used. Physical replicas equipped with electronic sensors replace the optical components. A numerical simulation computes the resulting interference pattern, and the result is displayed in real time on a screen by video projection. In addition, digital information is projected on top of the experimental board, which allows the working area to be augmented with pedagogical content. Figure 1 illustrates the overall framework of AMI.
The replicas that play the role of optical elements are physical elements without any optical components mounted on them, equipped with electronic sensors where needed. In AMI, such replicas replace the two mirror holders (with sensors), the beam splitter (without sensors), and the moving lens (with a sensor) at the output of the interferometer. One mirror holder is equipped with two rotating potentiometers that modify the (virtual) orientation of the mirror; the other is equipped with a rotating potentiometer that modifies its simulated position (distance to the screen). The replica of the lens is mounted on a linear potentiometer for adjusting the lens position, in order to correctly observe either rings or linear fringes. The different elements of AMI are 3D printed and mimic the real ones. Figure 2 illustrates the sensors we currently use for replicating the Michelson interferometer.
The sensors are connected to the numerical simulation through an Arduino board, which is responsible for reading the values from all of the sensors. This system formats the data and sends them to the computer over a USB connection. The optical simulation uses these data to compute the corresponding results.
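On the host side, such a sensor stream could be handled as in the following Python sketch. The frame format ("name:value" pairs on one serial line), the sensor names, and the angle calibration are illustrative assumptions, not the actual AMI protocol.

```python
# Hypothetical sketch of host-side handling of the Arduino sensor frames.
# Frame format, sensor names, and calibration constants are assumptions.

def parse_sensor_frame(line: str) -> dict:
    """Convert one serial line, e.g. 'm1_tilt:512,m1_tip:487,m2_pos:300,lens:120',
    into a dict of raw 10-bit ADC readings (0-1023)."""
    readings = {}
    for field in line.strip().split(","):
        name, raw = field.split(":")
        readings[name] = int(raw)
    return readings


def adc_to_angle(raw: int, max_angle_deg: float = 2.0) -> float:
    """Map a raw potentiometer reading to a small mirror tilt angle in degrees,
    centred on zero (assumed calibration range of +/- max_angle_deg)."""
    return (raw / 1023.0 - 0.5) * 2.0 * max_angle_deg


frame = parse_sensor_frame("m1_tilt:512,m1_tip:487,m2_pos:300,lens:120")
tilt_deg = adc_to_angle(frame["m1_tilt"])
```

In the real system, the line would be read from the USB serial port each simulation frame; here the input is hard-coded so the sketch is self-contained.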
We use a numerical simulation based on physical models that are executed in real time on a computer. The values obtained from the potentiometers (position and orientation of the mirrors and position of the lens) modify the parameters of the simulation. For example, in order to calculate the interference pattern for monochromatic light at any point P on the screen, we apply the formula given in Equation 1, where I0 is the light intensity one would measure with only a single arm of the interferometer operating, the other being blocked; λ is the wavelength; and δ(P) is the difference between the optical path lengths of the two arms. With this numerical simulation we can generate images that match those that would be obtained with an actual Michelson experiment. As can be seen in Figure 3, these images are projected onto a white surface by a projector system in which a geometrical correction is performed.
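For equal-intensity arms, the two-beam interference law takes the standard form I(P) = 2 I0 [1 + cos(2π δ(P)/λ)]. The Python sketch below illustrates how a synthetic circular-fringe image could be generated from it; the ring geometry δ = 2e cos θ (with e the virtual mirror separation) is the textbook Michelson air-gap model, and all numerical values are illustrative assumptions rather than the actual AMI implementation.

```python
import numpy as np

# Two-beam interference for equal-intensity arms (assumption):
#   I(P) = 2 * I0 * (1 + cos(2*pi * delta(P) / wavelength))

def intensity(delta, i0, wavelength):
    """Interference intensity for optical path difference delta
    (same units as wavelength)."""
    return 2.0 * i0 * (1.0 + np.cos(2.0 * np.pi * delta / wavelength))


def ring_pattern(size=512, e=20e-6, wavelength=633e-9, i0=1.0, max_angle=0.02):
    """Synthetic circular-fringe image using the textbook Michelson model
    delta(P) = 2 * e * cos(theta), where theta grows with the distance of
    the screen point from the optical axis (illustrative geometry)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    theta = np.hypot(x, y) / (size / 2) * max_angle   # inclination angle (rad)
    delta = 2.0 * e * np.cos(theta)
    return intensity(delta, i0, wavelength)
```

Bright fringes occur where δ is an integer multiple of λ (intensity 4 I0) and dark fringes where it is a half-integer multiple (intensity 0), which is what the projected image reproduces.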
Beyond the simulation of real optical phenomena, the system augments the workspace with digital information (see Figure 4). This information has educational purposes or serves as guidance for the users. For example, it lets users see how the light travels through the elements that form AMI; it also provides information about these elements, such as their current orientation angle or position. To achieve this, we map the physical space onto the augmented space to ensure system consistency. The information is projected by a video projector situated at the top of the setup.
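One common way to realize such a physical-to-projected mapping is a plane-to-plane homography estimated from a few calibration points on the board. The sketch below uses the standard direct linear transform (DLT); it is a hypothetical illustration of this calibration step, not the actual AMI code.

```python
import numpy as np

# Plane-to-plane homography between board coordinates and projector pixels,
# estimated from point correspondences via the direct linear transform (DLT).
# The calibration points used below are illustrative.

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    from 4 or more correspondences (no 3 collinear)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography (up to scale) is the right null vector of the system.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)


def apply_homography(h, pt):
    """Map a 2D point through the homography (homogeneous coordinates)."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

In practice the four source points would be physical markers on the board and the destination points their desired projector-pixel locations, so that projected annotations land exactly on the replicas.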
Finally, we use RFID tags mounted on physical objects to change the light source. Hence, to modify the source, and consequently the result of the simulation, the user just has to position the physical object at the location where he or she would position a true light source.
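A minimal sketch of this tag-to-source lookup in Python; the tag UIDs and the source parameters are purely illustrative assumptions, not the actual values used in AMI.

```python
# Hypothetical mapping from RFID tag identifiers to simulated light sources.
# UIDs and spectral data are illustrative only.

LIGHT_SOURCES = {
    "04A1B2": {"name": "He-Ne laser", "wavelengths_nm": [632.8]},
    "04C3D4": {"name": "Sodium lamp", "wavelengths_nm": [589.0, 589.6]},
    "04E5F6": {"name": "White light", "wavelengths_nm": list(range(400, 701, 10))},
}


def select_source(tag_uid: str) -> dict:
    """Return the simulated source for the tag placed at the source location,
    falling back to the He-Ne laser if the tag is unknown."""
    return LIGHT_SOURCES.get(tag_uid, LIGHT_SOURCES["04A1B2"])
```

When the reader detects a new tag at the source position, the returned spectrum replaces the monochromatic wavelength in the simulation, which immediately changes the projected fringe pattern.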
AMI makes it easy to start quickly from a preset condition, with predefined values for the light source and for any physical parameter: for example, the refraction index and thickness of the beam splitter, to highlight the need to compensate the path length with a compensator plate, or the focal length of the lens, to point out its effect on the interference pattern. The simulation then evolves according to the user's manipulations. The interaction has been made similar to that of a real interferometer, so we expect users to feel familiar manipulating the hybrid setup, since it is like using a real Michelson interferometer.
Based on these characteristics, the hybrid setup is expected to help students acquire a deeper understanding and the technical skills needed to run experiments in optics, photonics, and lasers.
We have described the current status of the AMI prototype. With the current setup, we can observe the interference pattern, the light path, and additional information to help the students understand what is happening.
Future versions of AMI will involve wireless components. This will allow reconfigurable virtual optical systems to be created dynamically: users will be able to take a component (e.g. a mirror) and place it on the optical bench, just as they would with a real setup. It will also allow the creation of more advanced systems, such as lasers.
Another interesting direction for future work is the addition of new physical input conditions that are very difficult, or even impossible, to achieve with an actual interferometer.
One of the goals of this project will be to develop a framework where people will be able to create or modify new optics content through the use of scripts.
We also plan to investigate in depth how well students can learn and develop specific skills in interferometry. To this end, we will conduct a large study to assess whether the AMI environment improves learning compared to a classical Michelson interferometer. This study, typically involving a hundred students and their teachers, will be conducted at the Institute of Technology of the University of Bordeaux during the fall semester of 2015 and will focus on AMI's ability to transmit knowledge and know-how to students.
M. Donaldson, [Children's Minds], Fontana Press, London (1978).
K. Ravanis and Y. Papamichaël, “Procédures didactiques de déstabilisation du système de représentations spontanées des élèves pour la propagation de la lumière,” Didaskalia, 7, 43–61 (1995).
P. Milgram, H. Takemura, A. Utsumi et al., “Augmented reality: A class of displays on the reality-virtuality continuum,” 282–292.
T. N. Arvanitis, A. Petrou, J. F. Knight et al., “Human factors and qualitative pedagogical evaluation of a mobile augmented reality system for science education used by learners with physical disabilities,” Personal and Ubiquitous Computing, 13(3), 243–250 (2007). https://doi.org/10.1007/s00779-007-0187-7
M. Billinghurst and A. Duenser, “Augmented reality in the classroom,” Computer, (7), 56–63 (2012).
B. E. Shelton and N. R. Hedley, “Exploring a cognitive basis for learning spatial relationships with augmented reality,” Technology, Instruction, Cognition and Learning, 1(4), 323–357 (2004).
Y.-C. Chen, [A study of comparing the use of augmented reality and physical models in the chemistry education], Hong Kong, 14–17 June (2006).
A. Di Serio, M. B. Ibanez and C. D. Kloos, “Impact of an Augmented Reality System on Students’ Motivation for a Visual Art Course,” Computers & Education (2012).
H. Salmi, A. Kaasinen and V. Kallunki, “Towards an Open Learning Environment via Augmented Reality (AR): Visualising the Invisible in Science Centres and Schools for Teacher Education,” Procedia – Social and Behavioral Sciences, 45(0), 284–295 (2012). https://doi.org/10.1016/j.sbspro.2012.06.565