Optical coherence tomography with online visualization of more than seven rendered volumes per second
Joachim Probst, Dierck Hillmann, Eva M. Lankenau, Christian Winter, Stefan Oelckers, Peter Koch, Gereon Hüttmann
Abstract
Nearly real-time visualization of 3-D volumes is crucial for the use of optical coherence tomography (OCT) during microsurgery. With an ultrahigh-speed spectral-domain OCT system coupled to a surgical microscope, on-line display of 7.2 rendered volumes at 87 megapixels per second is demonstrated. Calculation of the A-scans from the spectra is done on a quad-core personal computer (PC), while dedicated software for the 3-D rendering is executed on a high-performance video card. Imaging speed is in practice limited only by the readout of the camera. First experiments show the feasibility of real-time 3-D OCT for guided interventions.

1. Introduction

Optical coherence tomography was introduced more than 15 years ago for imaging retinal structures.1 Since then, imaging depth has increased only marginally and resolution has improved by less than a factor of 10, but imaging speed has been boosted by more than three orders of magnitude, from less than 100 to more than 300,000 A-scans per second.2 Instead of taking only still images of anatomical structures, the increased speed of OCT now allows volumes to be imaged nearly in real time. This not only enables scans of larger tissue surfaces such as the esophagus,3 colon,4, 5 or vessels,6 but also opens new applications beyond simple diagnosis. Noncontact volumetric imaging with less than 15-μm resolution can guide microsurgery at the eye,7, 8 in otolaryngology,9 and in other medical disciplines.10 First attempts at intrasurgical use of OCT failed mainly because of low imaging speed.7, 8 Full use of OCT during surgical procedures can only be made with rapid 3-D imaging, since otherwise it is difficult to bring the image field into coincidence with the relevant but dynamically changing anatomical structures. High-speed OCT imaging with spectrometer-based systems11 and swept-source OCT has been demonstrated,12, 13 but processing and visualization of the data had to be done off-line. However, only on-line display of the measured tissue volumes to the physician can fully exploit the potential of OCT for intrasurgical use. Real-time display of volumetric OCT data also solves the problem of storing the vast amounts of data generated by high-speed OCT systems.

We present an ultrahigh-speed OCT system integrated into a surgical microscope that is capable of processing, rendering, and displaying more than seven volumes of 12 million pixels each per second by using a PC with a high-performance graphics accelerator card. Best performance was reached by distributing the calculation of the A-scans over the four cores of the PC, whereas the preprocessing and the rendering were done in real time with dedicated software on a graphics processing unit (GPU).

2. Material and Methods

OCT data were acquired at a wavelength of 840 nm through a surgical microscope (MÖLLER Hi-R 1000, Möller-Wedel GmbH, Wedel, Germany) using a fast two-axis galvanometric scanning unit (6210, Cambridge Technology, USA) coupled to the camera port (Fig. 1). The lateral resolution was between 15 and 20 μm, depending on the magnification used with the surgical microscope. The output of the fiber interferometer was interfaced to a modified commercially available spectral domain OCT (Hyperion, Thorlabs HL AG, Lübeck, Germany), which uses a linear-k spectrometer14 with a fast complementary metal-oxide semiconductor (CMOS) camera (Sprint spL 4096-140k, Basler Vision Technologies, Ahrensburg, Germany) and an external light source (Broadlighter BS840-B-I-20, Superlum, County Cork, Ireland). With a spectral bandwidth of 40 nm, a resolution of approximately 15 μm in air was possible. The camera has two parallel lines with 4096 rectangular pixels each, which were binned to use the full height of the spectrum. With the full number of pixels, only 70,000 spectra per second could be measured. By reading out only a smaller number of pixels, the readout speed was increased; with 1024 pixels, an A-scan rate of 210 kHz was achieved. Due to the low full-well capacity of the camera pixels, the sensitivity was only 78 dB, and a 17-dB roll-off was measured at half of the 5.7-mm depth range. When 2048 pixels were read out with a binning of two pixels along the spectral axis (effectively 1024 pixels for data evaluation), image quality was significantly improved (sensitivity 85 dB, roll-off 10 dB) due to the higher effective full-well capacity, reduced cross talk between the spectral channels, and a slightly improved depth resolution. With the same grating and superluminescent diode (SLD), the imaging depth and the readout speed (which was not improved by binning) were both halved. Preprocessing, fast Fourier transform (FFTW library15), and postprocessing of the A-scans were done in parallel on all four cores of a 2.6-GHz Intel Core 2 Quad or Xeon processor. For 3-D rendering and display, the volumetric OCT data were transferred to a high-performance video card (NVidia GTX 280 with 1 GB of memory), where dedicated software was used for 3-D visualization of the data. All timing-critical software was written in C++ on the Microsoft Windows operating system using DirectX and the NVidia CUDA framework.
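
For readers who want to reproduce the CPU side of such a pipeline, the following sketch shows one way the spectral processing could be distributed over four cores with FFTW and std::thread. It is a minimal illustration under assumed names (SpectrumSize, processChunk, spectraToAscans), it omits the actual preprocessing details, and it is not the authors' implementation.

```cpp
// Sketch only: parallel conversion of camera spectra into A-scans, roughly
// following the pipeline described above (preprocess, FFT via FFTW, postprocess).
// All names are illustrative assumptions.
#include <fftw3.h>
#include <algorithm>
#include <cmath>
#include <thread>
#include <vector>

constexpr int SpectrumSize = 1024;  // samples per spectrum (after binning)
constexpr int DepthPixels  = 512;   // pixels kept per A-scan
constexpr int NumCores     = 4;     // quad-core CPU as in the paper

// Convert a contiguous block of spectra into log-magnitude A-scans.
static void processChunk(const float* spectra, float* ascans, int count,
                         fftwf_plan plan)
{
    float* in = fftwf_alloc_real(SpectrumSize);
    fftwf_complex* out = fftwf_alloc_complex(SpectrumSize / 2 + 1);
    for (int i = 0; i < count; ++i) {
        // Preprocessing (background subtraction, apodization) omitted here.
        std::copy_n(spectra + i * SpectrumSize, SpectrumSize, in);
        fftwf_execute_dft_r2c(plan, in, out);          // FFT of one spectrum
        for (int k = 0; k < DepthPixels; ++k)          // postprocessing: log magnitude
            ascans[i * DepthPixels + k] = 10.0f *
                std::log10(out[k][0] * out[k][0] + out[k][1] * out[k][1] + 1e-12f);
    }
    fftwf_free(in);
    fftwf_free(out);
}

// Process numSpectra spectra in parallel on NumCores worker threads.
void spectraToAscans(const float* spectra, float* ascans, int numSpectra)
{
    float* tmpIn = fftwf_alloc_real(SpectrumSize);
    fftwf_complex* tmpOut = fftwf_alloc_complex(SpectrumSize / 2 + 1);
    // One shared plan; executing it on distinct, fftwf-allocated arrays is thread safe.
    fftwf_plan plan = fftwf_plan_dft_r2c_1d(SpectrumSize, tmpIn, tmpOut, FFTW_ESTIMATE);
    std::vector<std::thread> workers;
    const int chunk = (numSpectra + NumCores - 1) / NumCores;
    for (int t = 0; t < NumCores; ++t) {
        const int begin = t * chunk;
        const int count = std::min(chunk, numSpectra - begin);
        if (count <= 0) break;
        workers.emplace_back(processChunk, spectra + begin * SpectrumSize,
                             ascans + begin * DepthPixels, count, plan);
    }
    for (auto& w : workers) w.join();
    fftwf_destroy_plan(plan);
    fftwf_free(tmpIn);
    fftwf_free(tmpOut);
}
```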

Fig. 1

(a) Setup of the real-time OCT coupled to a surgical microscope. (b) Principle of volume rendering by texture mapping.


Ray tracing (volume ray casting) and (3-D) texture mapping are the two options for rendering volumetric data.16, 17 Due to its better general performance on modern video cards, texture mapping, which projects a 2-D image onto the surface of a 2- or 3-D object, was used here. Modern graphics accelerators are highly optimized for working with surfaces and textures in three dimensions. Since texture mapping works only with surfaces, whereas volume elements (voxels) have to be displayed here, stacks of planes were defined that slice through the volumetric data (Fig. 1). Transparency was assigned to the textures on the planes to allow looking into the volume. The opacity of the pixels on the textures was calculated with a simple windowing algorithm from the intensity of the OCT voxels crossed by the texture planes: outside a user-chosen intensity range, pixels were set either completely transparent or to opaque white, depending on whether the corresponding OCT value lay below or above the windowing range. Within this range, the opacity was scaled linearly with the OCT signal.
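
The intensity-to-opacity windowing described above can be expressed compactly. The sketch below uses hypothetical names (voxelToTexel, windowLow, windowHigh) and packs each texel as white with a variable alpha channel; it is one possible realization of the scheme, not necessarily the one used in the paper's software.

```cpp
// Sketch of the intensity-to-opacity windowing: transparent below the window,
// opaque white above it, and linearly scaled alpha in between.
#include <cstdint>

// Map one OCT voxel intensity to an RGBA texel (alpha in the top byte, white RGB).
inline uint32_t voxelToTexel(float intensity, float windowLow, float windowHigh)
{
    float alpha;
    if (intensity <= windowLow)       alpha = 0.0f;  // fully transparent
    else if (intensity >= windowHigh) alpha = 1.0f;  // opaque white
    else                              alpha = (intensity - windowLow) /
                                              (windowHigh - windowLow);
    const uint8_t a = static_cast<uint8_t>(alpha * 255.0f + 0.5f);
    return (static_cast<uint32_t>(a) << 24) | 0x00FFFFFFu;  // A, R, G, B = a, 255, 255, 255
}
```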

3. Results

Rendering by the GPU was considerably faster than rendering by the CPU on the main board. A comparison showed a more than 30-fold speed increase relative to a single core; the data throughput was 2 GB/s instead of 60 MB/s. For the sake of faster processing, only stacks of planes aligned perpendicular to either the x, y, or z axis were calculated. The stack aligned most closely with the viewing direction was chosen, to prevent the user from looking through the space between the planes. With this approach, the texture-carrying planes are not always perpendicular to the viewing direction, which produces small artifacts, especially when the chosen stack changes abruptly due to a change of the viewing direction. With typical OCT data, however, this simple algorithm produced no disturbing artifacts.
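
Selecting the slice stack can be as simple as comparing the absolute components of the viewing direction; the following sketch illustrates the idea with assumed names (SliceStack, chooseStack) and is not taken from the paper's code.

```cpp
// Sketch: pick the axis-aligned slice stack whose plane normal is closest to
// the viewing direction, so the viewer never looks between the texture planes.
#include <cmath>

enum class SliceStack { X, Y, Z };  // stacks of planes perpendicular to x, y, or z

SliceStack chooseStack(float viewX, float viewY, float viewZ)
{
    const float ax = std::fabs(viewX), ay = std::fabs(viewY), az = std::fabs(viewZ);
    if (ax >= ay && ax >= az) return SliceStack::X;
    if (ay >= az)             return SliceStack::Y;
    return SliceStack::Z;
}
```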

For on-line visualization, the CMOS camera was continuously read out at 215,000 spectra per second. Complete volumes with 80 B-scans, each consisting of 380 A-scans with 512 pixels each, were continuously acquired in 141 ms (Fig. 2). Due to the fly-back time of the galvanometric scanners, only 300 A-scans per B-scan could be used for further processing. The readout of one volume and the calculation of the A-scans from the spectra took 116 ms. Since each of the roughly 12.3 million voxels was represented as a 4-byte floating-point number, approximately 49 million bytes had to be transferred to the video card, which was accomplished in less than 11 ms. Rendering itself took less than 5 ms, and a "clean-up" of less than 10 ms completed the cycle. For the 3-D rendering alone, the throughput was around 2 GB/s, including the data transfer between the main board and the video card. Complete serial processing, including the calculation of the FFT and the rendering of one volume, was done in 139 ms, which was slightly shorter than the 141 ms needed to read out the camera. The acquisition speed of the camera was therefore the speed-limiting factor.
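
The quoted numbers can be checked with a short back-of-the-envelope calculation; the snippet below only reproduces the arithmetic from the figures given above (80 B-scans × 300 usable A-scans × 512 pixels × 4 bytes) and is not part of the measurement software.

```cpp
// Back-of-the-envelope check of the volume size and data rates quoted above.
#include <cstdio>

int main()
{
    const double bScans = 80, aScans = 300, depthPixels = 512;  // usable voxels
    const double bytesPerVoxel = 4;                             // 32-bit float
    const double volumeBytes = bScans * aScans * depthPixels * bytesPerVoxel;
    const double transferSeconds = 0.011;                       // upload, < 11 ms
    const double renderCleanupSeconds = 0.015;                  // render + clean-up
    std::printf("volume: %.1f MB, upload: %.1f GB/s, render path: %.1f GB/s\n",
                volumeBytes / 1e6,
                volumeBytes / transferSeconds / 1e9,
                volumeBytes / (transferSeconds + renderCleanupSeconds) / 1e9);
    // ~49 MB per volume; ~2 GB/s when upload, rendering, and clean-up
    // (about 26 ms total) are taken together.
    return 0;
}
```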

Fig. 2

Timing of the different processing steps for the OCT volumes.


With the current spectrum size (1024 elements), the FFT was calculated faster on the CPU than on the GPU. Only from about 8000 pixels per spectrum upward is higher performance expected from the NVidia CuFFT library running on the GPU.18 If necessary, a further reduction of the cycle time and an increase of the data throughput could be achieved by overlapping the calculation of the A-scans on the CPU with the rendering by the GPU, which were performed sequentially in this work.
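
Such an overlap could, for example, be realized with simple double buffering, where the GPU renders the previously computed volume while the CPU acquires and processes the next one. The sketch below illustrates the idea; acquireSpectra, computeVolume, and renderVolumeOnGpu are hypothetical placeholders for the camera readout, the CPU processing, and the DirectX/CUDA rendering path, not functions from the paper's software.

```cpp
// Sketch of overlapping CPU A-scan computation with GPU upload/rendering via
// double buffering. All names are illustrative placeholders.
#include <functional>
#include <thread>
#include <vector>

void acquireSpectra(std::vector<float>& spectra);          // camera readout
void computeVolume(const std::vector<float>& spectra,
                   std::vector<float>& volume);            // CPU, four cores
void renderVolumeOnGpu(const std::vector<float>& volume);  // GPU upload + render

void runPipelined()
{
    std::vector<float> spectra, volume[2];
    int current = 0;
    acquireSpectra(spectra);
    computeVolume(spectra, volume[current]);
    for (;;) {                       // continuous live display loop
        // Render the finished volume on the GPU while the CPU already
        // acquires and processes the next volume.
        std::thread gpu(renderVolumeOnGpu, std::cref(volume[current]));
        acquireSpectra(spectra);
        computeVolume(spectra, volume[1 - current]);
        gpu.join();                  // wait for the GPU before swapping buffers
        current = 1 - current;
    }
}
```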

Image quality was sufficient for the visualization of structures in scattering tissues, such as sweat glands in the fingertip (Fig. 3). It was limited by the OCT setup, including the CMOS camera, rather than by the processing of the data. Binning considerably improved the visual impression, the signal-to-noise ratio (SNR), and the roll-off (Fig. 3), but reduced the pixel rate of the camera to approximately 50%. The possibility of tissue manipulation under OCT was tested by cutting an onion and removing different onion skin layers (Video 1).

Fig. 3

Screen shot of real-time 3-D OCT images. (a) Fingertip imaged at 7.2 volumes/s. (b) Higher quality image acquired from two-pixel-binned spectra at 3.6 volumes/s.


Video 1

Cutting of a thin tissue layer of an onion with a scalpel (arrow). Position and thickness of the tissue layer can be visualized and analyzed quantitatively. The video was recorded at 3.6 volumes/s (MPEG, 46 MB).

DOI: 10.1117/1.3314898.1

4. Discussion and Conclusion

Transfer of the OCT data to the GPU and volumetric texture mapping contributed only a small fraction to the overall processing time; only 20% of the cycle time was actually used for the 3-D rendering. There is therefore still processing capacity for additional steps in the rendering process, such as averaging, color coding, or the calculation of virtual B-scans. The speed of the OCT system was already compromised by the insufficient speed of the galvanometric scanners: more than 20% of the pixels were lost during the fly-back. Even faster OCT systems will require a new architecture for the lateral beam deflection, e.g., bidirectional scanning, resonant scanners, or microelectromechanical systems (MEMS) scanners.

For intrasurgical work, image quality and speed appear to be sufficient. However, optimization of the optical setup and binning of pixels will further increase image quality. During a surgical procedure, tissue movements in the x, y, and z directions will still be a challenge for our system due to the unavoidable roll-off and the limited number of lateral voxels (80 × 380). Automatic z-tracking of the reference delay line and tracking of lateral movements of the tissue or the instruments will be implemented in future versions of the OCT surgical microscope to improve applicability.

Acknowledgments

This work was supported by the local government of Schleswig-Holstein, Germany (HWT 2007-14 H) and the European Union research program FP7-HEALTH-2007-A (201880 FUN OCT).

References

1. D. Huang, E. Swanson, C. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178 (1991). https://doi.org/10.1126/science.1957169
2. R. Huber, D. C. Adler, and J. G. Fujimoto, "Buffered Fourier domain mode locking: unidirectional swept laser sources for optical coherence tomography imaging at 370,000 lines/s," Opt. Lett. 31, 2975–2977 (2006). https://doi.org/10.1364/OL.31.002975
3. B. J. Vakoc, M. Shishkov, S. H. Yun, W. Y. Oh, M. J. Suter, A. E. Desjardins, J. A. Evans, N. S. Nishioka, G. J. Tearney, and B. E. Bouma, "Comprehensive esophageal microscopy by using optical frequency-domain imaging (with video)," Gastrointest. Endosc. 65, 898–905 (2007).
4. X. Qi, Y. Pan, Z. Hu, W. Kang, J. E. Willis, K. Olowe, M. V. Sivak, and A. M. Rollins, "Automated quantification of colonic crypt morphology using integrated microscopy and optical coherence tomography," J. Biomed. Opt. 13, 054055 (2008). https://doi.org/10.1117/1.2993323
5. D. C. Adler, C. Zhou, T. H. Tsai, J. Schmitt, Q. Huang, H. Mashimo, and J. G. Fujimoto, "Three-dimensional endomicroscopy of the human colon using optical coherence tomography," Opt. Express 17, 784–796 (2009). https://doi.org/10.1364/OE.17.000784
6. P. Barlis and J. M. Schmitt, "Current and future developments in intracoronary optical coherence tomography imaging," EuroIntervention 4, 529–533 (2009).
7. R. Heermann, C. Hauger, P. R. Issing, and T. Lenarz, "Application of optical coherence tomography (OCT) in middle ear surgery," Laryngorhinootologie 81, 400–405 (2002). https://doi.org/10.1055/s-2002-32213
8. G. Geerling, M. Müller, C. Winter, H. Hoerauf, S. Oelckers, H. Laqua, and R. Birngruber, "Intraoperative 2-dimensional optical coherence tomography as a new tool for anterior segment surgery," Arch. Ophthalmol. 123, 253–257 (2005).
9. T. Just, E. Lankenau, G. Huttmann, and H. W. Pau, "Intra-operative application of optical coherence tomography with an operating microscope," J. Laryngol. Otol. 123, 1027–1030 (2009). https://doi.org/10.1017/S0022215109004770
10. H. J. Böhringer, D. Boller, J. Leppert, U. Knopp, E. Lankenau, E. Reusche, G. Hüttmann, and A. Giese, "Time-domain and spectral-domain optical coherence tomography in the analysis of brain tumor tissue," Lasers Surg. Med. 38, 588–597 (2006). https://doi.org/10.1002/lsm.20353
11. B. Potsaid, I. Gorczynska, V. J. Srinivasan, Y. Chen, J. Jiang, A. Cable, and J. G. Fujimoto, "Ultrahigh speed spectral/Fourier domain OCT ophthalmic imaging at 70,000 to 312,500 axial scans per second," Opt. Express 16, 15149–15169 (2008). https://doi.org/10.1364/OE.16.015149
12. M. W. Jenkins, D. C. Adler, M. Gargesha, R. Huber, F. Rothenberg, J. Belding, M. Watanabe, D. L. Wilson, J. G. Fujimoto, and A. M. Rollins, "Ultrahigh-speed optical coherence tomography imaging and visualization of the embryonic avian heart using a buffered Fourier Domain Mode Locked laser," Opt. Express 15, 6251–6267 (2007). https://doi.org/10.1364/OE.15.006251
13. W. Y. Oh, S. H. Yun, B. J. Vakoc, M. Shishkov, A. E. Desjardins, B. H. Park, J. F. de Boer, G. J. Tearney, and B. E. Bouma, "High-speed polarization sensitive optical frequency domain imaging with frequency multiplexing," Opt. Express 16, 1096–1103 (2008). https://doi.org/10.1364/OE.16.001096
14. Z. Hu and A. M. Rollins, "Fourier domain optical coherence tomography with a linear-in-wavenumber spectrometer," Opt. Lett. 32, 3525–3527 (2007). https://doi.org/10.1364/OL.32.003525
15. M. Frigo and S. G. Johnson, "The design and implementation of FFTW3," Proc. IEEE 93, 216–231 (2005). https://doi.org/10.1109/JPROC.2004.840301
16. R. Fernando, GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics (2004). http://http.developer.nvidia.com/GPUGems/gpugems_ch39.html
17. S. Stegmaier, M. Strengert, T. Klein, and T. Ertl, "A simple and flexible volume rendering framework for graphics-hardware-based raycasting," International Workshop on Volume Graphics '05 (2005). http://www.vis.uni-stuttgart.de/eng/research/fields/current/spvolren/
18. H. Merz, "CUFFT vs FFTW comparison," http://www.science.uwaterloo.ca/~hmerz/CUDA_benchFFT/
© 2010 Society of Photo-Optical Instrumentation Engineers (SPIE)
Joachim Probst, Dierck Hillmann, Eva M. Lankenau, Christian Winter, Stefan Oelckers, Peter Koch, and Gereon Hüttmann "Optical coherence tomography with online visualization of more than seven rendered volumes per second," Journal of Biomedical Optics 15(2), 026014 (1 March 2010). https://doi.org/10.1117/1.3314898