Fraud
1 November 2002


During the past year the corporate stock scandals have affected everyone who owns stock or has a retirement plan. It is with some trepidation that I open the quarterly reports from my mutual fund these days. Although it is difficult to accept the large drops in the values of the funds from month to month, I confess I greeted the large increases with happy anticipation just a few years earlier. If this stock market roller-coaster ride were just a historical fluctuation, it might be easier to accept. But when it is revealed that the stock values were manipulated through fraud and deception, it is harder to keep one’s cool.

Certainly the main motivation for fraud was greed, but deception was also carried out to meet the expectations of others. In recent years, if the performance of a company doesn’t equal or better the profit estimates of some “experts” on Wall Street—even if it falls a penny short—the stock takes a beating and the worth of all those with stock options in the company drops dramatically. It makes for a most pernicious situation: (1) only short-term profits are used to indicate the value of an enterprise, and (2) those with much to lose provide the information that determines the profit calculation. Meeting expectations, one of the most common and difficult activities we engage in, is present in scientific research and development as well as in finance.

In the process of doing research, nothing ever goes the way you expect it to. Many times, the data isn’t as definitive as you would like. Although the pressure in engineering tends to come from the product cycle or the annual report, in research it may arise for a number of reasons: from grant renewals to the fear of being scooped to the need for professional acclaim. And if the pressure is too great or if the researcher’s moral gyroscope is wobbly, disasters can result.

Consider the recent investigation into the work of Dr. J. Hendrik Schön and his collaborators at Bell Labs. On September 26th a committee of scientists appointed by Bell Labs issued a report on the investigation into Schön’s research, described in 17 papers. Although there were numerous authors, all the papers had Dr. Schön as an author. A report in the New York Times stated:

   The findings, in effect, dismiss as fiction results from 17 papers that had been promoted as major breakthroughs in physics, including claims last fall that Bell Labs had created molecular-scale transistors.

   The committee concluded that data in the disputed research, published between 1998 and 2001, had been improperly manipulated, even fabricated, confirming suspicions raised by outside scientists in May. The committee placed the blame for the deceit on one Bell Labs scientist, Dr. J. Hendrik Schön.

Several aspects of this incident are intriguing. One was that Dr. Schön was publishing papers at the rate of one every eight days, according to the Times. Although that should have sent up some flags somewhere, I suppose in some circles it just looked like diligent effort. Schön had said that it was all a series of mistakes. But to a researcher, it is difficult to accept such an explanation when the report revealed that he had told the committee he had deleted almost all of the original data files because he needed hard disk space to store other files and that he kept no laboratory notebooks.

One might ask why the reviewers of those 17 peer-reviewed papers approved them for publication. The responsibility of reviewers of any journal, including Optical Engineering, is to ensure that the research makes sense. They must make sure that it is understandable to those working in the field and, hopefully, to other scientists interested in the research. But it would be asking for the near-impossible for the reviewers to reproduce the experiments and verify the results. They can only rely on the good faith of the researchers whose work they are reviewing and use simple rationality checks, dimensional analysis, common sense, and comparison to papers on similar work to guide their evaluation.

It would seem that no scientist goes into the lab to perpetrate a fraud. Rather, having invested time, effort, and probably someone else’s money, he or she comes up against a common occurrence in research: nothing much is found, or the effect is weak or inconsequential. At that point, a researcher may fervently believe that with additional work he can achieve a particular result that reflects the way he thinks the world works. So he doctors the data or selects certain sets of data and publishes the results, assuming that in the next few months he will get the real data he needs to prove his point. Embezzlers use the same logic. They can’t put back the money they have taken, but they still have to balance the books. They assume that when the stock they bought with the “borrowed” money takes off, they will be able to return the money and no one will be the wiser. Then the stock market tanks. But nature isn’t like the stock market, and in research it isn’t the money.

One’s reputation is the coin of the realm in scientific research. If you produce important, interesting, and new research, you will be asked to give invited papers at conferences, to chair conferences, and to sit on funding evaluation committees. Although there may be a little money involved (an award, a speaking honorarium), it is a heady experience. But like artists and stockbrokers, you are expected to come up with something new or more profitable on a regular basis. At some point, the researcher who commits fraud gets trapped. Although there may be no audit committees in science to go over our notebooks (an intriguing concept), there are those who want to get in on this bull market in research (hot topic, easy pickings, rudimentary samples, simple concepts) and capture some of the glory. And when they cannot reproduce the simplest of the published results, questions begin to arise.

Fraud of any consequence in science is not widespread. Not because scientists are ever vigilant, questioning every paper published, but because important results demand explanation, elaboration, and, when possible, reduction to practice. Thus, most scientists look upon such events with amazement. The logic is obvious: if a result is important, it will be tested. If it is fraudulent, it will be exposed. If it is exposed, one’s reputation will be lost.

©(2002) Society of Photo-Optical Instrumentation Engineers (SPIE)
Donald C. O'Shea "Fraud," Optical Engineering 41(11), (1 November 2002). https://doi.org/10.1117/1.1523055
Published: 1 November 2002