This PDF file contains the front matter associated with SPIE Proceedings Volume 12102, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Developing automated threat detection algorithms for imaging equipment used by explosive ordnance disposal (EOD) and public safety personnel has the potential to improve mission efficiency and safety by automatically drawing a user’s attention to potential threats. To demonstrate the value of automated threat detection algorithms to the EOD community, Deep Analytics LLC (DA) developed an object detection algorithm that runs in real time on resource-constrained devices. The object detection algorithm identifies 10 common classes of improvised explosive device (IED) components in live video and alerts a user when an IED component is detected. In this paper we discuss the development of the IED component dataset, the training and evaluation of the object detection algorithm, and the deployment of the algorithm on resource-constrained hardware.
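The abstract does not give implementation details; as a rough, hypothetical sketch of the kind of real-time alerting loop it describes, the snippet below runs a generic torchvision detector over live video frames and prints an alert when a detection of an assumed target class exceeds an assumed confidence threshold. The model, class IDs, and threshold are placeholders, not the authors' system.

    # Hypothetical real-time alerting loop with a generic detector (not the authors' model).
    import cv2
    import torch
    import torchvision

    ALERT_CLASSES = {1, 2, 3}   # placeholder IDs standing in for IED-component classes
    CONF_THRESHOLD = 0.6        # assumed alert threshold

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    cap = cv2.VideoCapture(0)   # live video source
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            det = model([tensor])[0]
        for label, score in zip(det["labels"], det["scores"]):
            if float(score) >= CONF_THRESHOLD and int(label) in ALERT_CLASSES:
                print("ALERT: potential component detected (score %.2f)" % float(score))
    cap.release()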
Some applications require a high level of image-based classification certainty while keeping the total illumination energy as low as possible. Examples include minimally invasive visual inspection in Industry 4.0 and medical imaging systems such as computed tomography, in which the radiation dose should be kept "As Low As is Reasonably Achievable". We introduce a sequential object recognition scheme aimed at minimizing phototoxicity or bleaching while achieving a predefined level of decision accuracy. The novel online procedure relies on approximate weighted Bhattacharyya coefficients for the determination of future inputs. Simulation results on the MNIST handwritten digit database show how the total illumination energy is decreased with respect to a detection scheme using constant illumination.
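The selection rule builds on approximate weighted Bhattacharyya coefficients; as a minimal illustration only (the paper's exact weighting and approximation are not given in the abstract), the snippet below computes a weighted Bhattacharyya coefficient between two discrete class-conditional distributions.

    # Illustrative weighted Bhattacharyya coefficient; the paper's exact scheme may differ.
    import numpy as np

    def weighted_bhattacharyya(p, q, w=None):
        p = np.asarray(p, dtype=float); p = p / p.sum()
        q = np.asarray(q, dtype=float); q = q / q.sum()
        w = np.ones_like(p) if w is None else np.asarray(w, dtype=float)
        return float(np.sum(w * np.sqrt(p * q)))

    # Two similar class-conditional distributions give a coefficient close to 1,
    # i.e. little discriminative value from further illumination of that input.
    print(weighted_bhattacharyya([0.2, 0.3, 0.5], [0.25, 0.25, 0.5]))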
The new coronavirus disease (COVID-19) has compromised public health systems around the world. The numbers of infected people and deaths are escalating day by day, which puts enormous pressure on healthcare systems. COVID-19 symptoms include fatigue, cough, and fever. These symptoms also occur in other types of pneumonia, which complicates the identification of COVID-19, especially during the influenza season. The rise of the COVID-19 pandemic has made it essential to improve medical image screening of this pneumonia. Rapid identification is a necessary step to stop the spread of this virus and plays a vital role in early detection. With this as a motivator, we applied deep learning techniques to diagnose the coronavirus using chest X-ray images and to implement a robust AI application to classify COVID-19 pneumonia from non-COVID-19 cases of the respiratory system in these images. This paper proposes different deep learning algorithms, including classification and segmentation methods. Taking advantage of convolutional neural network models, we exploited different pre-trained deep learning models (ResNet50, ResNet101, VGG-19, and U-Net architectures) to extract features from chest X-ray images. Four datasets of chest X-ray images were employed to assess the performance of the proposed methods. These datasets were split into 80% for training and 20% for validation of the architectures. The experimental results showed an overall accuracy of 99.42% for the classification approach and 93% for the segmentation approach. The proposed approaches can help radiologists and medical specialists identify infected regions of the respiratory system in the early stages.
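As a minimal sketch of the transfer-learning setup the abstract describes (pre-trained backbone, 80/20 train/validation split), the snippet below fine-tunes a ResNet50 for two-class chest X-ray classification. The dataset path, epoch count, batch size, and learning rate are assumptions, not the authors' configuration.

    # Minimal transfer-learning sketch; paths and hyperparameters are assumed.
    import torch
    import torch.nn as nn
    from torch.utils.data import random_split, DataLoader
    from torchvision import datasets, transforms, models

    tfm = transforms.Compose([transforms.Resize((224, 224)),
                              transforms.Grayscale(num_output_channels=3),
                              transforms.ToTensor()])
    data = datasets.ImageFolder("chest_xrays/", transform=tfm)   # hypothetical folder layout
    n_train = int(0.8 * len(data))                               # 80/20 split as in the paper
    train_set, val_set = random_split(data, [n_train, len(data) - n_train])

    model = models.resnet50(weights="DEFAULT")                   # pre-trained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)                # COVID-19 vs. non-COVID-19
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):                                       # illustrative epoch count
        for x, y in DataLoader(train_set, batch_size=16, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()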
Deep learning has been widely used in recent years to accomplish many tasks such as image classification, natural language processing, and image denoising, among others. However, the process of creating deep neural networks by trial and error can be very repetitive and time-consuming, and it is not clear whether the entire network architecture space is explored in the search for an optimum architecture. This paper presents a systematic and automatic way to design or find an optimal architecture of deep neural networks. First, a sensitivity analysis is carried out on the parameters of interest of a network in order to identify those parameters which are most influential on the performance of the network. A search space is defined based on these parameters. Reinforcement learning is then used to find an optimal architecture within this search space. In this paper, our method of finding an optimal network architecture is applied to the problem of image denoising. In particular, the emphasis is placed on the Densely Connected Hierarchical Network (DHDN). A resulting network, named ENAS-DHDN, is shown to marginally outperform the original network, suggesting that the original network is close to optimal. After an optimal network is found, it is used to estimate the time to process Standard Definition (SD) and High Definition (HD) videos at a frame rate of 30 fps, indicating that real-time video denoising at SD resolution is achievable.
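The abstract defines a search space over the most influential parameters and searches it with reinforcement learning; the toy sketch below shows only what such a parameterized search space and a naive (random) search over it might look like. The parameter names, value ranges, and dummy evaluation function are all assumptions standing in for the ENAS-style controller and the real training/validation step.

    # Toy architecture-search sketch; random search stands in for the RL controller.
    import random

    SEARCH_SPACE = {                      # assumed influential parameters
        "num_blocks":   [2, 3, 4],
        "growth_rate":  [8, 16, 32],
        "kernel_size":  [3, 5],
        "use_residual": [True, False],
    }

    def sample_architecture():
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def evaluate(arch):
        # Placeholder: the real system trains the candidate denoiser and returns
        # its validation PSNR; a random number keeps the sketch self-contained.
        return random.random()

    best, best_score = None, float("-inf")
    for _ in range(20):                   # search budget
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best, best_score = arch, score
    print(best)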
Labeled data are necessary for supervised neural network (NN) training. However, supervised learning does not scale favorably, because human intervention for labeling large datasets is expensive. Here, we propose a method that introduces interventions on the training set, and enables NNs to learn features in a self-supervised learning (SSL) setting. The method intervenes in the training data by randomly changing image contrast and removing input image patches, thus creating a significantly augmented training dataset. This is fed into an autoencoder (AE) network, which learns how to reconstruct input images given variable contrast and missing patches of pixels. The proposed technique enables few-shot learning of most relevant image features by forcing NNs to exploit context information in a generative model. Here, we focus on a medical imaging application, where large labeled datasets are usually not available. We evaluate our proposed algorithm for anomaly detection on a small dataset with only 23 training and 35 test images of T2-weighted brain MRI scans from healthy controls (training) and tumor patients (test). We find that the image reconstruction error for healthy controls is significantly lower than for tumor patients (Mann-Whitney U-test, p < 10⁻¹⁰), which can be exploited for anomaly detection of pathologic brain regions by human expert analysis of reconstructed images. Interestingly, this still holds for conventional AE training without SSL, although reconstruction error distributions for healthy/diseased subjects appear to be less dissimilar (p < 10⁻⁷). We conclude that the proposed SSL method may be useful for anomaly detection in medical imaging, thus potentially enhancing radiologists' productivity.
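A minimal sketch of the intervention the abstract describes, random contrast change plus random patch removal applied to a 2D image before it enters the autoencoder, is shown below. The patch size, number of patches, and contrast range are assumptions.

    # Sketch of the self-supervised corruption step (patch size and contrast range assumed).
    import numpy as np

    def corrupt(image, patch=16, n_patches=8, contrast_range=(0.5, 1.5), rng=None):
        rng = rng or np.random.default_rng()
        out = image.astype(float) * rng.uniform(*contrast_range)   # random contrast change
        h, w = out.shape
        for _ in range(n_patches):                                  # remove random patches
            y = rng.integers(0, h - patch)
            x = rng.integers(0, w - patch)
            out[y:y + patch, x:x + patch] = 0.0
        return np.clip(out, 0.0, float(image.max()))

    # The autoencoder is trained to reconstruct `image` from `corrupt(image)`, so at test
    # time a large reconstruction error can flag anomalous (e.g., tumor) regions.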
Hardware used for AI/ML applications has trended towards more powerful and more power-hungry devices. Currently, GPUs and some FPGA datacenter accelerator cards can consume 200-300 W at full load, which makes these devices impractical in many edge-computing applications. Some semiconductor manufacturers are beginning to build AI-accelerated silicon to improve not only power consumption but also form factor and cost. We examine one such device, the MAX78000 Artificial Intelligence Microcontroller. With synthesis software provided by the manufacturer, this microcontroller can perform inference with models trained in high-level software such as PyTorch or TensorFlow. Before synthesis, quantization is performed on the model weights, which allows the model to occupy a much smaller memory footprint and perform more efficient calculations, but decreases model accuracy. We attempt to measure the reduction in performance and accuracy degradation that should be expected for this device by benchmarking CNN (Convolutional Neural Network) inference on datasets such as MNIST,1 a dataset consisting of handwritten digits, and CIFAR-10,2 a dataset containing images divided into ten classes. We benchmark inference using models such as SimpleNet and models found through NAS (Neural Architecture Search), adding batch processing of test datasets to code generated by the AI8X synthesis tools from the MAX78000 SDK. Using the performance and accuracy results from testing the aforementioned datasets and neural network models, we attempt to predict the feasibility of performing inference for CNN use cases such as real-time image recognition and object detection. For each case we examine which commonly used algorithms are or are not feasible within the resource limitations of the MAX78000 SoC.
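The MAX78000 workflow quantizes weights with the vendor's AI8X tooling before synthesis; as a generic stand-in that illustrates the memory/accuracy trade-off rather than that toolchain, the snippet below applies PyTorch post-training dynamic quantization to a small stand-in network.

    # Generic PyTorch quantization sketch (not the AI8X/MAX78000 flow; stand-in network).
    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # stand-in classifier for MNIST-sized inputs
        nn.Flatten(),
        nn.Linear(28 * 28, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )
    model.eval()

    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8   # 8-bit weights: smaller footprint, some accuracy loss
    )

    x = torch.randn(1, 1, 28, 28)               # dummy MNIST-sized input
    print(model(x).argmax(1), quantized(x).argmax(1))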
We exemplify a new mathematical theory and method for high efficiency sensing in real time, in contrast to compressed sensing theory, which has been among the most sensational topics of scientific research in the past century. Compared with compressed sensing, high efficiency sensing radically rectifies the mathematical rationale while immensely improving technical performance. Based on the discovery of a new logical phenomenon, we make a broad spectrum of innovations in rationale, methodology, transforms, and techniques, which together result in dominant improvement in terms of both data quality and computation speed. High efficiency denotes high quality plus high speed. The pivotal innovation is simple yet powerful. Demo software and test data are both downloadable at our website www.lucidsee.ca.
We present a formal inversion of the multiscale discrete Radon transform, valid both in 2D and 3D. With the transformed data from just one of the four quadrants of the direct 2D Radon transform, or one of the twelve dodecants in the case of the 3D Radon transform, we can invert the whole domain exactly and directly, with no iterations. The computational complexity of the proposed algorithms is O(N log N), where N is the total size of the problem, either square or cubic. However, these inverse transforms are extremely ill-conditioned, so the presence of noise in the transformed domain renders them useless. We nevertheless present both algorithms and characterize their weakness against noise.
We propose a local bar-shaped structure detector that works in real time on high-resolution images. It is based on the Radon transform, specifically the multi-scale variant, which is especially fast because it works in integer arithmetic and does not use interpolation. The Radon transform conventionally operates on the whole image rather than locally; in this paper we describe how, by stopping at the early stages of the Radon transform, we are able to locate structures locally. We also evaluate the performance of the algorithm running on the CPU, GPU, and DSP of mobile devices to process, at acquisition time, the images coming from the device's camera.
A deep machine learning-based electro-optics system (TurbNet sensor) was developed to measure the atmospheric turbulence refractive index structure parameter (Cn²) at high temporal resolution by processing short-exposure intensity scintillation patterns. The TurbNet sensor was composed of a remotely located LED beacon, an optical receiver telescope with a CCD camera for capturing short-exposure pupil-plane intensity scintillation patterns, and a Jetson Xavier NX embedded AI-computing platform to implement the deep neural network (DNN)-based processing of LED beam scintillation images. Performance of the TurbNet sensor was evaluated over a 7 km atmospheric propagation path.
Object detection from high resolution images is increasingly used for many important application areas of defense and commercial sensing. However, object detection on high resolution images requires intensive computation, which makes it challenging to apply on resource-constrained platforms such as in edge-cloud deployments. In this work, we present a novel system for streamlined object detection on edge-cloud platforms. The system integrates multiple object detectors into an ensemble to improve detection accuracy and robustness. The subset of object detectors that is active in the ensemble can be changed dynamically to provide adaptively adjusted trade-offs among object detection accuracy, real-time performance, and energy consumption. Such adaptivity can be of great utility for resource-constrained deployment to edge-cloud environments, where the execution time and energy cost of full-accuracy processing may be excessive if utilized all of the time. To promote efficient and reliable implementation on resource-constrained devices, the proposed system design employs principles of signal processing oriented dataflow modeling along with pipelining of dataflow subsystems and integration on top of optimized, off-the-shelf software components for lower level processing. The effectiveness of the proposed object detection system is demonstrated through extensive experiments involving the Unmanned Aerial Vehicle Benchmark and KITTI Vision Benchmark Suite. While the proposed system is developed for the specific problem of object detection, we envision that the underlying design methodology, which integrates adaptive ensemble processing with dataflow modeling and optimized lower level libraries, is applicable to a wide range of applications in defense and commercial sensing.
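The abstract does not give the ensemble's implementation; the hypothetical sketch below only illustrates the adaptivity idea, keeping a pool of detectors and running just the currently active subset per frame. The detector functions, the activation policy, and the (omitted) box-merging rule are placeholders.

    # Hypothetical adaptive ensemble: only the active subset of detectors runs on each frame.
    def detector_fast(frame):
        return [("car", 0.7, (10, 10, 50, 50))]     # placeholder detection

    def detector_accurate(frame):
        return [("car", 0.9, (12, 11, 49, 52))]     # placeholder detection

    POOL = {"fast": detector_fast, "accurate": detector_accurate}

    def run_ensemble(frame, active):
        detections = []
        for name in active:                          # only active detectors consume compute/energy
            detections.extend(POOL[name](frame))
        # A real system would merge overlapping boxes (e.g., with NMS); here they are pooled.
        return detections

    print(run_ensemble(frame=None, active=["fast"]))              # low-power operating point
    print(run_ensemble(frame=None, active=["fast", "accurate"]))  # full-accuracy operating point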
The real-time identification of targets on small unmanned aircraft systems (UAS) is a challenging task. One approach to achieving this task is the use of image recognition in deep learning networks on embedded processors. While it has been well established that the use of deep learning networks can help increase the reliability of image recognition applications, less research has been performed on the requirements for selecting an appropriate embedded processor that can meet the speed and efficiency needs of real-time target identification. The embedded processor must fit within the size, weight, and power (SWaP) constraints of small UAS, while still meeting the computational and memory requirements of the detection algorithms. To determine whether embedded processors meet these form factor requirements and other performance considerations, we evaluated and compared several commercially available embedded processors based on their physical specifications, performance using lightweight benchmark machine learning models developed for commercial use, and performance using a Navy-developed deep convolutional neural network (CNN) used for identifying the California Least Tern. This evaluation will provide information on the necessary hardware and software requirements for performing complex computing tasks on a UAS in real time using image recognition deep learning networks on embedded processors.
Real-time monitoring of insects has important applications in entomology, such as managing agricultural pests and monitoring species populations, which are rapidly declining. However, most monitoring methods are labor-intensive, invasive, and not automated. Lidar-based methods are a promising, non-invasive alternative, and have been used in recent years for various insect detection and classification studies. In a previous study, we used supervised machine learning to detect insects in lidar images that were collected near Hyalite Creek in Bozeman, Montana. Although the classifiers we tested successfully detected insects, the analysis was performed offline on a laptop computer. For the analysis to be useful in real-time settings, the computing system needs to be an embedded system capable of computing results in real time. In this paper, we present work-in-progress towards implementing our software routines in hardware on a field programmable gate array.
The use of location and instruction markers for multi-path planning enhancement in any set of unmanned aerial systems’ tasks is crucial to the coordination and effectiveness of the individual unmanned aerial vehicles (UAVs). This research implements OpenCV algorithms that allow multiple UAVs to use ArUco markers to receive data related to location and instruction for the purposes of multi-path planning. OpenCV algorithms are utilized to develop vision-based solutions that will enhance the real-time capabilities of the UAVs. The final goal for the multi-drone system entails inspecting and surveying objects for structural damage and applying the developed image processing algorithms to collected images to determine the significance of damage. This project utilizes OpenCV and Python libraries for multi-drone pathway planning by collecting, transmitting, and displaying real-world industrially valuable data over the network infrastructure as an application of Internet of Things (IoT).
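As a minimal sketch of the OpenCV ArUco step the abstract refers to, the snippet below detects markers in a single frame and maps marker IDs to instructions. The dictionary choice, the ID-to-instruction table, and the image path are assumptions (requires opencv-contrib-python; the detector class shown is the OpenCV 4.7+ API).

    # Minimal ArUco detection sketch; dictionary and ID-to-instruction mapping are assumed.
    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    WAYPOINTS = {0: "takeoff pad", 1: "inspection point A"}   # hypothetical instruction table

    frame = cv2.imread("frame.png")                           # placeholder frame from a UAV camera
    if frame is not None:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = detector.detectMarkers(gray)
        if ids is not None:
            for marker_id in ids.flatten():
                print("Marker", int(marker_id), "->", WAYPOINTS.get(int(marker_id), "unknown"))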
This study proposes a hybrid CAD system in which the first stage consists of handcrafted segmentation, followed by a CNN based on the ResNet-34 architecture. In the segmentation stage, the rib cage (thorax region) is extracted using the K-means algorithm. The extraction of the nodules is performed in two steps: those attached to the pleura are found via a hysteresis threshold on the rib cage, while the circumscribed and vascular nodules are extracted using morphological operations. The resulting segmentation masks are applied to the test images, decreasing the number of false positives. Finally, the resulting image is split into patches to be classified by the ResNet-34 trained from scratch. The designed CAD system has been implemented on the Google Colab platform and on a standalone computer with an Nvidia RTX 3090. Experiments with different CAD systems were performed on the SPIE and LIDC-IDRI datasets, demonstrating the better performance of the designed technique with a reduction in false-positive objects.
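The rib-cage extraction step relies on K-means; the sketch below shows only a generic intensity-based K-means split of a single CT slice, with the cluster count and the brighter-cluster heuristic being assumptions rather than the authors' exact procedure.

    # Illustrative K-means thorax masking on pixel intensities (cluster count assumed).
    import numpy as np
    from sklearn.cluster import KMeans

    def thorax_mask(slice_2d, n_clusters=2):
        intensities = slice_2d.reshape(-1, 1).astype(float)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(intensities)
        labels = labels.reshape(slice_2d.shape)
        # Assume the brighter cluster corresponds to body/thorax tissue.
        means = [slice_2d[labels == k].mean() for k in range(n_clusters)]
        return labels == int(np.argmax(means))

    # mask = thorax_mask(ct_slice)   # ct_slice: a 2D numpy array from the scan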
Chest X-ray imaging is a low-resource diagnostic tool that can provide sufficient information from the thorax, helping a specialist find patterns in order to diagnose pneumonia. Also, because these images are simple to obtain, chest X-ray is the first choice over CT, US, or MRI imaging in paediatric patients. In this work, we propose a novel pseudo-attention module based on handcrafted features that generates a region-of-interest (ROI) image of the thorax, excluding the rest of the body and eliminating the labels contained in this type of examination. After the ROI image is obtained, it is evaluated with several convolutional neural network architectures such as DenseNet, ResNet, and MobileNet. Finally, the designed system employs the Grad-CAM algorithm to provide a perceptual image of the features most relevant to classifying the Pneumonia class against the Normal class. The system has demonstrated similar or better performance in comparison with state-of-the-art methods using evaluation metrics such as accuracy, precision, sensitivity, and F1 score.
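The explanation step uses Grad-CAM; a compact, hedged sketch using forward/backward hooks on a torchvision backbone is shown below. The backbone (ResNet-18 rather than the paper's networks), the target layer, and the random input tensor are stand-ins.

    # Compact Grad-CAM sketch; backbone, target layer, and input are stand-ins.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights="DEFAULT").eval()
    activations, gradients = {}, {}

    def save_activation(module, inp, out):
        activations["a"] = out              # forward activation of the target layer

    def save_gradient(module, grad_in, grad_out):
        gradients["g"] = grad_out[0]        # gradient of the class score w.r.t. that activation

    model.layer4.register_forward_hook(save_activation)          # assumed target stage
    model.layer4.register_full_backward_hook(save_gradient)

    x = torch.randn(1, 3, 224, 224)         # placeholder for the preprocessed ROI image
    model(x)[0].max().backward()            # back-propagate the top class score

    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)      # channel importance weights
    cam = F.relu((weights * activations["a"]).sum(dim=1))        # class activation map
    cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")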
This paper proposes an embedded implementation able to evaluate the pavement quality of road infrastructure using a low-cost microcontroller board, an analog microphone placed inside the tyre cavity, and a Convolutional Neural Network for real-time classification. To train the neural network, audio tracks were collected with a vehicle moving at different cruise speeds (30, 40, 50 km/h) in the area of Pisa. The raw audio signals were split, labelled, and converted into images by calculating the MFCC spectrogram. Finally, the author designed a tiny CNN with a size of 18 KB able to classify five different classes: good quality road, bad quality road, pothole-bad road, silence, and unknown. The CNN model achieved an accuracy of 93.8% on the original model and about 90% on the quantized model. The final embedded system is equipped with BLE communication for transmitting information to a GPS-equipped smartphone in order to obtain real-time maps of road quality.
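The preprocessing converts tyre-cavity audio segments into MFCC spectrogram images; a minimal librosa sketch is shown below, where the sample rate, segment length, and number of coefficients are assumptions rather than the paper's settings.

    # Minimal MFCC-spectrogram sketch; sample rate, segment length, and n_mfcc are assumed.
    import numpy as np
    import librosa

    def audio_to_mfcc_image(path, sr=16000, segment_s=1.0, n_mfcc=13):
        y, sr = librosa.load(path, sr=sr)
        seg = y[: int(segment_s * sr)]                         # one labelled audio segment
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc)
        # Normalize to [0, 1] so the coefficients can be treated as a small grayscale image.
        mfcc = (mfcc - mfcc.min()) / (mfcc.max() - mfcc.min() + 1e-9)
        return mfcc.astype(np.float32)

    # image = audio_to_mfcc_image("road_audio.wav")   # hypothetical file; fed to the tiny CNN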
This paper proposes a parallel scheme for suppressing speckle noise in SAR images. The designed technique is based on forming 3D arrays of a clustered image by areas and using Maximum a Posteriori (MAP) estimation, where the a priori information is obtained by the Discrete Wavelet Transform (DWT), improving the despeckling quality. Moreover, a variant of the bilateral filter is used as a post-processing stage to recover and enhance edge quality after the filtering procedure. The proposed scheme was implemented in a serial version and two parallel versions: the first uses OpenMP to parallelize over a multi-core CPU, and the second uses CUDA to execute on a GPU. Experimental results demonstrate that the framework guarantees good despeckling performance on SAR images obtained from the TerraSAR-X database, considering objective quality criteria such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Edge Preservation Index (EPI). Furthermore, simulation results for the parallel implementations demonstrate their efficiency for a real-time environment.
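The evaluation relies on PSNR, SSIM, and EPI; the sketch below computes these for a despeckled image with scikit-image, where the EPI implementation (correlation of Laplacian responses) is one common variant and an assumption rather than the authors' exact definition.

    # Quality-metric sketch: PSNR and SSIM from scikit-image, plus a common EPI variant.
    import numpy as np
    from scipy.ndimage import laplace
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def edge_preservation_index(reference, filtered):
        # One common EPI definition: correlation of high-pass (Laplacian) responses.
        r, f = laplace(reference.astype(float)), laplace(filtered.astype(float))
        r, f = r - r.mean(), f - f.mean()
        return float(np.sum(r * f) / np.sqrt(np.sum(r ** 2) * np.sum(f ** 2)))

    def evaluate(reference, filtered):
        rng = float(reference.max() - reference.min())
        return {"PSNR": peak_signal_noise_ratio(reference, filtered, data_range=rng),
                "SSIM": structural_similarity(reference, filtered, data_range=rng),
                "EPI":  edge_preservation_index(reference, filtered)}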
This paper assesses the efficacy of self-supervised learning on the DeepDR Diabetic Retinopathy Image Dataset (DeepDRiD). Recently, self-supervised learning has achieved great success in the field of computer vision. In particular, self-supervised learning can effectively serve the field of medical imaging, where the amount of labeled data is usually limited. In this paper, we apply the Bootstrap Your Own Latent (BYOL) approach to grading diabetic retinopathy, the task that scores the lowest among the MedMNIST datasets. With the model pre-trained using BYOL, we evaluate the efficacy of the BYOL approach on DeepDRiD following fine-tuning protocols. Further, we compare the performance of the model with a model trained from scratch and demonstrate the effectiveness of BYOL on DeepDRiD. Our experiments show that BYOL can boost the performance of grading diabetic retinopathy.
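As a generic sketch of the fine-tuning protocol the abstract refers to (load the BYOL-pretrained encoder, replace its head with a grading classifier, train end-to-end), the snippet below is a minimal example where the checkpoint path, the five-grade head, and the optimizer settings are assumptions.

    # Generic fine-tuning sketch for a self-supervised-pretrained backbone (paths/hparams assumed).
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet50()
    state = torch.load("byol_pretrained_resnet50.pth", map_location="cpu")   # hypothetical checkpoint
    backbone.load_state_dict(state, strict=False)         # encoder weights from BYOL pre-training

    backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # five diabetic-retinopathy grades (assumed)
    optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def fine_tune_step(images, labels):
        optimizer.zero_grad()
        loss = criterion(backbone(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()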
In nuclear medicine imaging, coded apertures are used to improve sensitivity. Amplification of quantum noise affects the inverse filtering reconstruction. Although this is improved by Wiener filtering, the major problem is small terms in the spectral distribution of coded masks, and so a variable coded aperture (VCA) design is used. The unique variable design makes it possible to overcome the small terms in the Fourier transform that exist in a static array. However, traces of duplications still remain. We present a combination of the VCA with a deep convolutional neural network to remove noise stemming from the limited abilities of inverse filtering, achieving higher SNR and resolution.
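Inverse/Wiener filtering of the coded-aperture data is the baseline the VCA-plus-CNN approach improves on; as a minimal illustration, the snippet below performs a plain FFT-based Wiener deconvolution, where the noise-to-signal constant k is an assumption.

    # Plain FFT-based Wiener deconvolution sketch (the regularization constant k is assumed).
    import numpy as np

    def wiener_deconvolve(detector_image, mask_psf, k=0.01):
        H = np.fft.fft2(mask_psf, s=detector_image.shape)   # coded-mask transfer function
        G = np.fft.fft2(detector_image)
        W = np.conj(H) / (np.abs(H) ** 2 + k)               # Wiener filter; k damps small |H| terms
        return np.real(np.fft.ifft2(W * G))

    # Small |H| terms in a static mask amplify noise; the variable coded aperture and the
    # denoising CNN described in the paper address what the constant k only crudely suppresses.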
Photoacoustic imaging is a noninvasive medical imaging method developed in recent years. It combines the high resolution and rich contrast of optical imaging with the high penetration depth of acoustic imaging, providing safe, high-resolution, and high-contrast imaging. As an important branch of photoacoustic imaging, photoacoustic microscopy can achieve higher-resolution imaging. However, its poor axial resolution relative to its lateral resolution has always been a limitation. In recent years, deep learning has shown certain advantages in the processing of photoacoustic images. Therefore, this paper proposes to integrate the U-Net semantic segmentation model with a K-Wave-based simulation platform for photoacoustic microscopy to improve the axial resolution of photoacoustic microscopy. First, the dataset (B-scans and their corresponding ground truth images) required for deep learning is obtained using the K-Wave-based simulation platform. The dataset is randomly divided into a training set and a test set with a ratio of 7:1. During training, the B-scans are used as the input of the U-Net-based convolutional neural network architecture, while the ground truth images are the desired output of the network. Experimental measurements on carbon nanoparticles showed an increase in axial resolution by a factor of ~4.2. This method further improves the axial resolution, which helps to obtain the structural features of tissue more accurately, and provides theoretical guidance for the treatment and diagnosis of diseases.