“Try my image, please.”
Donald C. O'Shea
1 August 2006

Recently, a reviewer asked an Associate Editor, Giordano Beretta, to send two brightly colored images to the author of the paper he or she was reviewing. The reviewer asked, “Can the authors process the attached two images using the proposed algorithm, and show me the results?” Giordano commented,

The problem we have with imaging algorithms is that it is often very hard to assess the merit of the proposed algorithm. Oftentimes people use a couple of “standard images” like Lena, a scan from a Playboy magazine centerfold, but rarely do they show their results on a larger set of images, especially images that are hard to reproduce. Since there is no standard test suite for image processing… reviewers have to rely mostly on the algorithm’s description.

Researchers working in this field have a set of images they collect to use for testing typical cases where algorithms fail. For example, anybody working on halftoning knows exactly which algorithm’s shortcoming causes which artifact in Lena. Unfortunately nobody has created a standard collection.

The prospect of a reviewer testing the concepts and conclusions is a novel one. Today, when journals publish results that subsequently are found to be fraudulent, misguided, or negligent, a standard defense of peer-reviewed publication is that reviewers cannot duplicate the experiment or easily test the conclusions of the paper. Rather, it is said, reviewers can only serve to detect error and correct concepts and interpretations, but they cannot detect fraud. For most research, this may be true. But research consisting of simulated experiments might generate a new set of rules that provides both authors and reviewers with ways to validate the results of a paper.

Today researchers are divided between the have-nots and the have-computers. Earlier, the division in science and engineering was between experimentalists and theoreticians. Theoreticians usually developed a series of equations and evaluated them as best they could, using approximations or logarithmic computations when necessary. Experimentalists at first built, and later bought, the instrumentation needed to carry out their experiments. With the advent of powerful personal computers, another division of research was established: simulation.

This type of research allowed researchers with personal computers to establish computational techniques within their fields and produce credible work with modest investments, or to hook into the newly evolving Internet for access to large mainframes. The evolution continued, so that within the space of a decade individual desktop computers were being assembled into clusters that provided mainframe computing power at a fraction of the mainframe cost.

After that brief period, the division between the have-nots and the have-computers returned. Theoretical research, with ever more complicated equations to evaluate, became simulation research. So a newly hired theoretician, instead of requesting startup money for four dozen pads of paper and five boxes of pencils, asks for a state-of-the-art computer cluster and the air conditioning required to handle the load. The have-nots don't require pencil and paper either, but they must try to do their research and establish their reputations using pretty much the same type of resources as their colleagues. I would note that this rise in computational power has also affected experimental physics. Simulations are now used to run experiments that could not be done any other way: from the inside of a hot star to the tip of an atomically sharp needle, multiple configurations of the system under study can be set up and the laws of physics allowed to take their course.

Certainly this limitation affects the type of research that can be done. These days it depends on what machine your keyboard talks to and what machine sends back the data you need. Those without a large amount of computing power must choose tractable problems. In optical engineering, you might think that optical design would be an appropriate choice. But the widespread availability of design programs allows almost anyone to set up and optimize a system, and the sophistication of the fabrication technology needed to build systems with advanced performance dictates that almost all design studies remain paper studies.

Two of the fields in which a good deal of what might be called single-PC simulation research is done are optical communication and image processing. Much of the optical communication work is in network theory and fiber design. In image processing, a great deal of work is published on image compression, watermarking, and encryption. This type of research recommends itself to single-PC systems because the goals are usually computational efficiency and reduced data storage. As noted earlier, this type of research might address one of the objections to peer review, the inability to test the conclusions of a paper.
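To make the idea concrete, the following is a minimal sketch, in Python, of the kind of check a reviewer-supplied test image makes possible. The synthetic gradient image, the coarse quantization that stands in for “the proposed algorithm,” and the choice of peak signal-to-noise ratio (PSNR) as the figure of merit are all illustrative assumptions, not details taken from any particular paper.

import numpy as np

def psnr(original, processed, peak=255.0):
    # Peak signal-to-noise ratio, in dB, between two images of the same size.
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Stand-in for a reviewer-supplied test image: a smooth gradient with a little
# noise, the kind of content where lossy-processing artifacts are easy to see.
rng = np.random.default_rng(0)
image = np.clip(
    np.linspace(0, 255, 512)[None, :] + rng.normal(0, 8, (512, 512)),
    0, 255,
).astype(np.uint8)

# Stand-in for "the proposed algorithm": coarse uniform quantization, a toy
# proxy for a lossy compression step.
step = 32
processed = (image // step) * step + step // 2

print(f"PSNR on the supplied image: {psnr(image, processed):.2f} dB")

A reviewer who could run an author's code on an image of his or her own choosing, and compare figures of merit such as this, would be testing a paper's conclusions rather than simply reading about them.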

Should such an arrangement be permitted? If so, what kind of procedures would be necessary so that both reviewers and authors know what they can ask for and what is expected of them? Should a reviewer be able to furnish an image, or only suggest one, that would enable the reviewer to judge the accuracy of the conclusions? There would be no requirement that the image or the results from its use be published as part of the paper, unless, of course, the author believes doing so would strengthen it. The goal is to let the reviewer arrive at an informed and fair evaluation of the paper.

© 2006 Society of Photo-Optical Instrumentation Engineers (SPIE)
Donald C. O'Shea, “Try my image, please.”, Optical Engineering 45(8), 080101 (1 August 2006). https://doi.org/10.1117/1.2335469