Global optimization of imaging lenses, non-sequential ray tracing for illumination (or stray-light) analyses, FDTD modeling of 3D photonic devices, optical field calculations by the Gaussian beamlet method, and image simulations using very large 2D FFTs are among the most computer-intensive tasks in optical engineering. The first of these, run on reasonably priced Intel/Microsoft platforms, is used to test whether relevant single-core computational speeds have kept up with predictions, from the early 1990s' 32-bit 486s running extended DOS to today's 64-bit CPUs running Windows 10. It is found that before around 2005, performance doubled every 18 months (equivalent to 100 times every decade), as predicted by House's variant of Moore's 1975 law. At that point thermal issues led to a more efficient microarchitecture and a relative stagnation of CPU frequencies, so progress slowed significantly, although much of it could be reclaimed through effective utilization of a large number of computational cores. Therefore, previously published results comparing multi-core performance on the other tasks are updated here using CPUs and GPUs nearly a decade newer. Even though these have much higher clock rates and core counts than the old hardware, their relative performance falls short (and sometimes far short) of expectations.
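The equivalence quoted above, that an 18-month doubling time corresponds to roughly a 100-fold gain per decade, is easy to verify. The following few lines are an illustrative check only, not part of the benchmark itself:

```python
# House's variant of Moore's law: performance doubles every 18 months.
# Over a decade (120 months) that compounds to about 100x.
months_per_decade = 120
doubling_period_months = 18
decade_growth = 2 ** (months_per_decade / doubling_period_months)
print(f"Growth over one decade: {decade_growth:.1f}x")  # ~101.6x
```

The exact figure, 2^(120/18) ≈ 102, is customarily rounded to the "100 times per decade" cited in the text.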