One of my graduate students decided to use a popular commercial software package to investigate scattering by three-dimensional (3-D) objects. Within a few weeks, he came up with very nice plots of scattering patterns as well as colorful spatial profiles of scattered fields. As the 3-D objects were of complicated shapes and my brain is still wired for analytical treatments of scattering, I asked him to repeat his calculations for a sphere made of an isotropic material. After looking at the diagrams he supplied, I decided that something had gone wrong.
I then asked him to solve the problem of scattering by an isotropic sphere analytically, and compare his results with those yielded by the commercial software. For good measure, he was also urged to validate his results against a couple of reference works.1,2 His analytical results agreed completely with the reference works, but not with the data from the commercial software. It took him another month to bring the software's output into reasonable agreement with the analytical results.
The commercial software my student used offers numerous options to the user at the input stage. These options must be chosen correctly to obtain acceptable outputs. As problems grow in complexity, the correct choices may not be the obvious ones. One must therefore validate not only against results published previously with the same software but also against well-known and trusted results obtained either analytically or with techniques different from those implemented in the software.
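To illustrate the kind of analytical benchmark I have in mind, here is a minimal sketch of the Lorenz–Mie efficiencies for a lossless isotropic sphere, against which a numerical solver's output can be checked. It assumes SciPy is available; the function name, the truncation rule, and the restriction to a real refractive index are my own simplifications for illustration, not taken from the reference works cited above.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_efficiencies(m, x, n_max=None):
    """Extinction and scattering efficiencies Q_ext, Q_sca of a homogeneous,
    isotropic, lossless sphere (Bohren & Huffman conventions).

    m: relative refractive index (real, for simplicity)
    x: size parameter 2*pi*a/lambda
    """
    if n_max is None:
        # Truncation in the spirit of Wiscombe's rule (approximate)
        n_max = int(x + 4.0 * x ** (1.0 / 3.0) + 2.0)
    n = np.arange(1, n_max + 1)

    jx = spherical_jn(n, x)
    jxp = spherical_jn(n, x, derivative=True)
    yx = spherical_yn(n, x)
    yxp = spherical_yn(n, x, derivative=True)
    jm = spherical_jn(n, m * x)
    jmp = spherical_jn(n, m * x, derivative=True)

    # Riccati-Bessel functions: psi_n(z) = z j_n(z), xi_n(z) = z h_n^(1)(z)
    psi_x, psi_xp = x * jx, jx + x * jxp
    psi_m, psi_mp = m * x * jm, jm + m * x * jmp
    xi_x = x * (jx + 1j * yx)
    xi_xp = (jx + 1j * yx) + x * (jxp + 1j * yxp)

    # Mie coefficients a_n, b_n
    a = (m * psi_m * psi_xp - psi_x * psi_mp) / (m * psi_m * xi_xp - xi_x * psi_mp)
    b = (psi_m * psi_xp - m * psi_x * psi_mp) / (psi_m * xi_xp - m * xi_x * psi_mp)

    q_ext = (2.0 / x**2) * np.sum((2 * n + 1) * (a + b).real)
    q_sca = (2.0 / x**2) * np.sum((2 * n + 1) * (np.abs(a)**2 + np.abs(b)**2))
    return q_ext, q_sca
```

Two built-in sanity checks make this series itself trustworthy: for a lossless sphere, energy conservation demands Q_ext = Q_sca, and for small size parameters Q_sca must approach the Rayleigh limit (8/3) x⁴ [(m² − 1)/(m² + 2)]². A solver whose output fails either test has been fed the wrong user options.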
My student’s experience made me reflect on my experiences as a reviewer of journal papers for the last quarter of a century. At the beginning of that period, commercial software was rarely used to solve scattering problems. Researchers did use digital computers but wrote their own codes, which were validated against well-known and trusted results. A typical research paper would mention this validation and even contain a figure or two in support.
That level of validation has gradually eroded. When I have requested validation in recent years, authors have typically responded that their commercial software is tested so well by its source company that validation is unnecessary. I agree that source companies do test their products extensively, but the options still need to be chosen at the input stage by the user. Therefore, validation remains essential.
When validation against well-known and trusted results is either infeasible or unlikely to add confidence in the results provided, the user options must be stated in the research paper so that an interested reader can verify the presented results. But user options are rarely mentioned in research papers. Even descriptions of the underlying methods are vanishing, many authors being content simply to state that they used XYZ software of CDE Inc., Anytown, Somecountry. Presumably, many journal editors accept such skinny “descriptions” as adequate.
A similar problem exists in experimental papers. When a certain piece of equipment is used to fabricate or process a material, it is not enough to provide the model number and the name of the manufacturer. What temperature and pressure settings were used? What was the evaporation rate? What current and voltage were set? And so on. All relevant parameters must be presented in a research paper.
Boutique journals routinely relegate descriptions of methods, numerical as well as experimental, to an appendiceal section printed in a small font, much as an afterthought. Increasingly, in a few journals, methods are being incorporated as supplementary information that may not have been subjected to peer review. Being trendsetters, these journals reinforce the message that results alone matter. I fear that, in due course, data will be analyzed and published by researchers who lack understanding of the methods used to obtain those data and, therefore, of their reliability.
As a vision of what can happen, just look at the careless regurgitation of numerical information in U.S. newspapers. While pretending to be grounded in solid analyses, newspaper articles are usually superficial in meaning.3 Despite the proliferation of news sources, a large fraction of the citizenry of the world’s largest technoscientific powerhouse remains technoscientifically illiterate, which is reflected every so often in quixotic pronouncements by U.S. politicians on climate change, stem-cell research, and biological evolution, among other topics.
Let me reassure the readers of the Journal of Nanophotonics that authors are strongly encouraged to provide reasonably complete descriptions of their methods.