This PDF file contains the front matter associated with SPIE Proceedings Volume 12831, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
The classification of brain tissue plays a key role in brain tumor diagnosis and treatment. It currently relies on post-surgical histochemical staining, which is time consuming and delays follow-up treatment. Identifying tumor borders during resection is essential for efficient therapy, minimizing the removal of healthy tissue while maximizing the removal of tumor tissue. The approaches in use today are either expensive and time consuming or limited to certain tumor types. We propose a real-time, in vivo, label-free classification approach applicable to both tasks. Based on autofluorescence properties, tissue types can be differentiated without labels. To this end, a multicore fiber (MCF) based endoscope is designed to fit into the biopsy needles used during diagnosis and to serve as a handheld probe during tumor resection. Illuminating and imaging through the same MCF shrinks the endoscope to a submillimeter diameter. Autofluorescence images are not currently used in pathology, so medical doctors cannot interpret them; we bridge this gap with a neural network for diagnosis. One challenge for neural networks in medical applications is the availability of training data. We investigate different techniques to maximize classification performance with a limited training dataset: cascaded neural networks combined with digital twins improve the results while reducing the required training dataset size. Preliminary data indicate that our technology could shift the paradigm in brain tumor diagnosis and therapy owing to its accuracy, versatile design, and low cost.
To effectively manage inoperable deep-seated brain diseases, a high-resolution diminutive endoscope capable of precisely localizing and evaluating lesions in vivo is required. In this study, we introduce an ultrathin robotic OCT neuroendoscope designed for minimally invasive, targeted imaging in the deep brain. The neuroendoscope, measuring only 0.6mm in diameter, is fabricated by coupling a custom micro-lens onto the fiber tip, enabling high-resolution imaging of 2.4μm × 4.5μm in the axial and transverse directions. To ensure precise trajectory planning and accurate lesion localization within the brain, we developed a skull-mounted robotic neuroendoscope positioner with a localization accuracy of approximately 1mm. To demonstrate the capabilities of our technology, we used electromagnetic tracking to control and navigate the neuroendoscope, enabling precise localization and imaging of targets within a brain phantom. This technology holds significant potential to translate OCT neuroendoscopy into clinical practice for deep brain conditions.
Healthy brain tissue can be damaged or made ischemic by retractors during neurosurgery. Our research focuses on developing a tool that alerts surgeons to conditions that could cause these injuries. To do this, we employ optical sensors that transmit light into tissue and quantify the amount of reflected light. This study examines the feasibility of measuring the force applied to a tissue using optical signals. Using machine learning, we were able to predict the force applied to the optical sensor by a finger. Future work will focus on developing an algorithm using porcine brain data.
Virtual staining creates H&E-like images with minimal tissue processing. Typically, two channels are used, but single-channel staining is attractive for techniques such as reflectance confocal microscopy (RCM). Our study trains a deep learning model to generate H&E images from single-channel RCM using pixel-level registration. Porcine skin was stained with acridine orange, SR101, and aluminum chloride, and confocal microscopy images were acquired. Using pix2pixGAN, we trained the model on grayscale RCM images, producing virtually stained images that closely resembled the ground truth. We present example model outputs and evaluate model performance with image assessment metrics. This technique has potential for in vivo surgical applications, eliminating the need for image registration.
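Image assessment metrics of the kind used to compare virtual stains against ground truth can be sketched in plain NumPy. The MSE/PSNR and single-window SSIM below are generic textbook formulations, not the authors' evaluation code; in practice SSIM is usually computed with a sliding Gaussian window.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a test image (dB)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    """SSIM over one global window (simplified; no sliding window)."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give infinite PSNR and SSIM of 1.0; both metrics degrade monotonically with pixel-wise error, which is why they are common sanity checks for image translation models.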
Real-time fringe projection profilometry (FPP) is developed as a 3D vision system to plan and guide autonomous robotic intestinal suturing. Conventional FPP requires sinusoidal patterns at multiple frequencies and phase shifts to generate tissue point clouds, resulting in a slow frame rate. Therefore, although FPP can reconstruct dense and accurate tissue point clouds, it is often too slow for dynamic measurements. To address this problem, we propose a deep learning-based single-shot FPP algorithm that reconstructs tissue point clouds from a single sinusoidal pattern using a Swin-Unet. With this approach, we achieve an FPP imaging frame rate of 50Hz while maintaining high point cloud measurement accuracy. The system was trained and evaluated on both synthesized and experimental datasets, achieving an overall relative error of 1-3%.
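For context, the multi-pattern baseline that the single-shot network replaces is standard N-step phase shifting: from N images of a sinusoidal pattern shifted by 2π/N, the wrapped phase at each pixel follows from an arctangent of weighted sums. This is the generic textbook formulation, not the authors' pipeline, and a real system would still need phase unwrapping and triangulation afterward.

```python
import numpy as np

def wrapped_phase(frames):
    """Wrapped phase from N phase-shifted fringe images.

    frames: (N, H, W) stack with frames[n] = A + B*cos(phi + 2*pi*n/N).
    Returns phi wrapped to (-pi, pi] per pixel.
    """
    frames = np.asarray(frames, dtype=np.float64)
    n = np.arange(frames.shape[0]).reshape(-1, 1, 1)
    delta = 2.0 * np.pi * n / frames.shape[0]
    num = np.sum(frames * np.sin(delta), axis=0)   # = -(N*B/2) sin(phi)
    den = np.sum(frames * np.cos(delta), axis=0)   # =  (N*B/2) cos(phi)
    return -np.arctan2(num, den)
```

Because N ≥ 3 frames per reconstruction are needed, the camera-limited frame rate drops by at least that factor, which motivates replacing this step with a single-shot learned predictor.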
Minimally invasive surgery (MIS) has expanded broadly in the field of abdominal and pelvic surgery. However, there are still prevalent issues surrounding intracorporeal surgery, such as iatrogenic injury, anastomotic leakage, or the presence of positive tumor margins after resection. Current approaches to address these issues and advance laparoscopic imaging techniques often involve fluorescence imaging agents, such as indocyanine green (ICG), to improve visualization, but these have drawbacks. Hyperspectral imaging (HSI) is an emerging optical imaging modality that takes advantage of the spectral characteristics of different tissues, with applications including tissue classification and digital pathology. In this study, we developed a dual-camera system for high-speed hyperspectral imaging, including a custom application interface and the corresponding hardware setup. Characterization of the system, including spectral accuracy and spatial resolution, showed little sacrifice in speed for an approximate doubling of the covered spectral range, with our system acquiring 29 spectral images from 460 to 850nm. Reference color tiles with various reflectance profiles were imaged, and an RMSE of 3.56 ± 1.36% was achieved. Sub-millimeter resolution was shown at a 7cm working distance for both hyperspectral cameras. Finally, we imaged ex vivo tissues, including porcine stomach, liver, intestine, and kidney, and used a high-resolution, radiometrically calibrated spectrometer for comparison and evaluation of spectral fidelity. The dual-camera hyperspectral laparoscopic imaging system can have immediate applications in various surgeries.
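Spectral accuracy against reference tiles is conventionally computed on flat-field-calibrated reflectance; a minimal sketch, assuming white and dark reference captures are available (a generic illustration, not this system's calibration code):

```python
import numpy as np

def calibrate_reflectance(raw, white, dark):
    """Flat-field calibration: per-pixel, per-band reflectance in [0, 1]."""
    raw, white, dark = (np.asarray(a, float) for a in (raw, white, dark))
    return (raw - dark) / (white - dark)

def rmse_percent(measured, reference):
    """RMSE between two reflectance spectra, in percent reflectance."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(d ** 2)) * 100.0)
```

The dark frame removes sensor offset and the white tile normalizes out the illumination and per-band throughput, so camera spectra become directly comparable to a calibrated spectrometer's.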
Cilia beat frequency (CBF) is an essential measure of fallopian tube function. In the present study, we adapted a previously developed functional optical coherence tomography (OCT) method to spatially map and quantify CBF in one human sample ex vivo. Time-lapse image sets at different locations were acquired using OCT (n=8) and 50x magnification brightfield (BF) microscopy (n=2) as ground truth. A sliding-window Fourier analysis was performed on the OCT and BF image sets to quantify CBF on a pixel-by-pixel basis. Parameters were optimized to maximize the contrast of cilia and were applied uniformly to all OCT image sets. CBF color maps were created and qualitatively compared to unprocessed OCT intensity image sets to evaluate the spatial mapping accuracy of CBF values. Line plots of amplitude vs. frequency at 1-second intervals, and pixel peak-frequency histograms of whole image sets, were created to visualize the dominant CBFs. An analysis of variance was used to compare CBFs as measured with OCT and BF microscopy. Our results revealed qualitatively accurate spatial mapping of non-zero CBF values to pixels in ciliated areas, which were visibly appreciable on unprocessed intensity image sets. There was no significant difference between the dominant CBFs as measured with OCT and ground-truth BF microscopy (3.2 ± 1.6Hz vs. 3.2 ± 0Hz, p=0.97). Bulk sample movement was a significant source of temporal variability and amplified high-frequency noise.
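The pixel-wise Fourier mapping can be sketched as follows. This is a simplified single-window version of the sliding-window analysis; the frame rate and frequency band limits are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def cbf_map(stack, fs, fmin=1.0, fmax=15.0):
    """Dominant beat frequency per pixel from a time-lapse image stack.

    stack: (T, H, W) intensity time series; fs: frame rate in Hz.
    Returns an (H, W) map of the peak frequency within [fmin, fmax].
    """
    stack = np.asarray(stack, float)
    stack = stack - stack.mean(axis=0)          # remove DC per pixel
    spec = np.abs(np.fft.rfft(stack, axis=0))   # amplitude spectrum
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    peak = np.argmax(spec[band], axis=0)        # strongest in-band bin
    return freqs[band][peak]
```

Restricting the search band suppresses slow bulk-motion components and high-frequency noise, the two artifact sources noted above, though it cannot remove broadband motion energy that leaks into the ciliary band.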
Our research highlights the potential of Diffuse Reflectance Spectroscopy (DRS) in detecting cortical breaches during pedicle screw placement. We propose a sideways-looking fiber-optic probe integrating diffuse light emission with both forward and sideways light collection. Experiments on an optical tissue phantom validate the probe’s potential to distinguish bone tissues and provide real-time guidance for spine surgery. Our findings demonstrate that DRS with diffuse emission can detect perpendicular breaches, and show how the integration of a gold-coated 45° slanted fiber enables parallel breach detection, advancing spine surgery by allowing accurate pedicle screw placement.
We develop a method to measure the liver steatosis percentage through relative quantification of fat with near-infrared diffuse reflectance spectroscopy. The results show good correlation with gold-standard anatomopathological analysis.
Fluorescence imaging is widely used in many domains, one of which is fluorescence-guided surgery. Fluorescence lifetime, the temporal decay behavior of the emission, offers deeper insight and serves as an additional contrast mechanism. Current fluorescence-guided surgery imaging typically uses two separate cameras for natural color and NIR fluorescence imaging, making systems complex and expensive. We propose an alternative approach: sequential RGB+NIR fluorescence lifetime imaging, with the images overlaid on each other. Sequential RGB+NIR fluorescence imaging is achieved with a high-speed time-gated camera combined with an RGB LED ring for natural color illumination and a picosecond pulsed NIR laser source for fluorescence lifetime imaging.
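From a series of time-gated intensities, a monoexponential lifetime can be estimated with a log-linear least-squares fit. This noise-free sketch is a generic illustration of the principle, not the authors' processing chain; real gated data require background subtraction and noise-aware fitting.

```python
import numpy as np

def lifetime_ns(gate_times_ns, intensities):
    """Monoexponential lifetime estimate from gated intensities.

    Fits log(I) = log(I0) - t/tau by least squares, so tau = -1/slope.
    """
    t = np.asarray(gate_times_ns, float)
    log_i = np.log(np.asarray(intensities, float))
    slope, _ = np.polyfit(t, log_i, 1)
    return -1.0 / slope
```

Because the lifetime depends only on the decay slope, not on absolute brightness, it is robust to fluorophore concentration and illumination variations, which is what makes it attractive as an extra contrast channel in guided surgery.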
Functional lymphatics are essential for the removal and transport of cellular waste and excess fluid from regional tissues and depend upon contractile lymphangion activity for proximal drainage into the venous blood stream. Lymphatic insufficiency in patients with cancer-acquired lymphedema is manifested by progressive dermal backflow, or retrograde flow of lymph into dermal lymphatic capillaries. Prior studies using near-infrared fluorescence (NIRF) lymphatic imaging with ICG as a contrast agent found that dermal backflow provides an early indication of lymphedema onset that, when untreated, persists over months and years but, with two weeks of physiotherapy, can be resolved or reduced in early head and neck cancer survivorship. Thus, the extent or area of dermal backflow may provide an accurate, longitudinal measure of progressing or improving lymphatic dysfunction. In this work, we develop hardware and software solutions to automate the determination of dermal backflow area on 3-dimensional tissue surface profiles for entry into the medical record. Specifically, we incorporate a stereo depth module into our custom NIRF lymphatic imaging system for simultaneous acquisition of depth, color, and NIRF images. Using camera calibration techniques, NIRF images are mapped onto point clouds derived from the depth images. Algorithms for image segmentation of dermal backflow and for stitching multiple point clouds into a more complete representation of dermal backflow across complex 3-dimensional tissue surfaces are described. Non-clinical testing demonstrates ±3% error in dermal backflow area determination, and clinical testing on head and neck cancer survivors is underway to assess the efficacy of physiotherapies provided early after cancer treatment.
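Mapping segmented NIRF pixels onto depth data and integrating an area can be sketched with a pinhole camera model. The intrinsics below are hypothetical, and the area term assumes each pixel's surface patch faces the camera; surface-normal correction and point-cloud stitching, as described above, are omitted.

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth z to camera XYZ."""
    z = float(depth)
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def patch_area(depth_map, mask, fx, fy):
    """Approximate area covered by segmented pixels (frontal patches).

    Each pixel at depth z subtends a patch of about (z/fx) x (z/fy).
    """
    z = np.asarray(depth_map, float)[mask]
    return float(np.sum(z * z / (fx * fy)))
```

Summing per-pixel patch sizes rather than counting pixels makes the area estimate independent of camera standoff distance, which matters for a longitudinal measure tracked across clinic visits.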
Label-free visualization of the prediabetic microenvironment of adipose tissue provides a less invasive alternative for characterizing insulin resistance (IR) and inflammatory pathology. Here, we identified differentiating features of prediabetic adipose tissue by metabolic imaging of three endogenous fluorophores: NAD(P)H, FAD, and lipofuscin-like pigments. We discovered that 1040-nm-excited lipofuscin-like autofluorescence can mark the location of macrophages. This unique feature helps separate the metabolic fluorescence signals of macrophages from those of adipocytes. In prediabetic fat tissue with IR, we found that only adipocytes exhibited a metabolic fluorescence profile different from that of normal adipocytes. When mice have inflamed fat tissue, both adipocytes and macrophages show this metabolic change. Based on spatial fluorescence metabolomics, we developed an innovative approach to diagnose prediabetes, providing insights into diabetes prevention strategies.
Current methods for monitoring patients for cancer recurrence after treatment require travel to a centralized laboratory, costing time in scheduling appointments and waiting for results, imposing a financial burden for travel to clinics, and relying on invasive procedures (i.e., biopsies) that cause patient discomfort. To improve convenience and outcomes and to enable more frequent monitoring of cancer recurrence, we propose an implantable hydrogel sensor for remote cancer surveillance. Gold nanostars (GNS), efficient plasmonic nanomaterials, are embedded in hydrogels to enhance the Raman scattering signals of cancer biomarkers. A handheld Raman spectroscopy probe collects these signals, which represent a unique vibrational molecular fingerprint. Toward this effort, this study demonstrates the performance of a GNS-embedded hydrogel for discriminating serum in two preclinical mouse prostate cancer models: NSG and C57BL/6J mice. GNS labeled with 4-mercaptobenzoic acid (4-MBA) were embedded in 70μL hydrogels. Six serum samples from NSG mice (3 with LNCaP subcutaneous tumors, 3 normal) and eight serum samples from C57BL/6J mice (3 wild type, 5 transgenic TRAMP mice with prostate cancer) were obtained. Serum (70μL) was incubated overnight (4°C) with each hydrogel sample. Raman spectra were collected at five distinct locations using the handheld Raman probe. Spectral analysis involved intensity normalization, principal component analysis (PCA) for dimension reduction, and linear discriminant analysis (LDA) for classification with leave-one-spectrum-out cross-validation. NSG mice exhibited band differences at 775-825 cm-1, 1202-1249 cm-1, and 1430-1478 cm-1 (LDA ROC AUC = 0.83), while C57BL/6J mice showed differences at 1152-1245 cm-1 and 1362-1407 cm-1 (LDA ROC AUC = 0.98). Successful discrimination of serum in both mouse models demonstrates the presence of biomarkers that differentiate cancer-bearing mice and the potential for remote cancer monitoring.
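The analysis pipeline (normalization aside) can be sketched in plain NumPy: dimension reduction by PCA fit on the training fold, then a two-class Fisher discriminant, evaluated leave-one-out. This is a generic reconstruction of the named steps, not the authors' implementation, and synthetic data stand in for the spectra.

```python
import numpy as np

def fisher_lda_predict(x_tr, y_tr, x_te):
    """Two-class Fisher LDA: project on w = Sw^-1 (m1 - m0), threshold midway."""
    x0, x1 = x_tr[y_tr == 0], x_tr[y_tr == 1]
    m0, m1 = x0.mean(0), x1.mean(0)
    sw = np.cov(x0.T) * (len(x0) - 1) + np.cov(x1.T) * (len(x1) - 1)
    sw += 1e-6 * np.eye(sw.shape[0])          # regularize the scatter matrix
    w = np.linalg.solve(sw, m1 - m0)
    return int(w @ x_te > w @ (m0 + m1) / 2.0)

def pca_lda_loo(x, y, n_pc=3):
    """Leave-one-spectrum-out accuracy of a PCA -> LDA pipeline."""
    x = np.asarray(x, float); y = np.asarray(y)
    correct = 0
    for i in range(len(x)):
        tr = np.ones(len(x), bool); tr[i] = False
        mu = x[tr].mean(0)
        # PCA basis computed from the training fold only (no leakage)
        _, _, vt = np.linalg.svd(x[tr] - mu, full_matrices=False)
        p = vt[:n_pc].T
        correct += fisher_lda_predict((x[tr] - mu) @ p, y[tr], (x[i] - mu) @ p) == y[i]
    return correct / len(x)
```

Refitting the PCA basis inside each leave-one-out fold avoids information leakage from the held-out spectrum, which otherwise inflates small-sample AUC estimates.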
Early cancer detection is critical for successful treatment. Current cancer detection methods require travel to a centralized laboratory for testing, which can be time-consuming, costly, invasive, and infrequent. Patients could benefit from a less invasive method to monitor recurrence that could be performed more frequently from the comfort of their home. We propose an implanted hydrogel sensor for remote cancer monitoring. Gold nanostars (GNS) embedded within the hydrogel produce surface-enhanced Raman scattering signals of cancer biomarkers that are collected remotely using a handheld probe, with the results sent to the patient's provider. Here, we present results demonstrating the ability to discriminate human prostate cancer plasma. GNS were labeled with 4-mercaptobenzoic acid (4-MBA) and embedded into 70μL hydrogels. Four prostate cancer samples and five non-prostate cancer samples were obtained from a biobank. 70μL of each sample was combined with one hydrogel per sample and incubated overnight at 4°C. A handheld probe was used to collect Raman spectra at 5 different locations across each hydrogel face. The classification algorithm included intensity normalization based on the 4-MBA signal, principal component analysis (PCA) for dimension reduction, and linear discriminant analysis (LDA) or logistic regression for classification with leave-one-sample-out cross-validation. Comparison of cancer and non-cancer spectra shows relative peak intensity differences between the two groups, including at 726 cm-1 and 1450 cm-1. The area under the ROC curve was up to 0.94 for logistic regression. These results show the potential of remote cancer monitoring with a hydrogel SERS sensor.
The rising incidence of skin cancer, in particular melanoma, highlights the need for improved detection techniques. Determining the maximal depth of a lesion is crucial for planning excision margins and optimizing treatment outcomes. However, the current gold standard diagnostics, such as excision and histological examination, are invasive, time-consuming, and may not accurately measure the deepest position of the lesion. Preoperative knowledge of lesion size and depth would greatly assist surgical planning and enhance the likelihood of achieving tumor-free excision margins. In this work, we report on an integrated imaging system that combines ultrasound (US) and photoacoustic tomography (PAT) into a single scanning unit, enabling fast and non-invasive co-localized measurements. Our system facilitates C-mode imaging, providing visualization of lesion depth across its entire size. The design of the setup offers a clear optical window, allowing for integration with other optical modalities. We conducted in vivo measurements on suspicious human skin nevi promptly followed by excision. The combined US/PAT imaging technique demonstrated a strong correlation with histological Breslow thickness, a key parameter for lesion depth. These results highlight the potential of combined US and PAT as a promising non-invasive 3D imaging approach for evaluating human nevi and other skin lesions. By correlating our imaging data with corresponding histological findings, we aim to increase the accuracy and demonstrate the clinical utility of the integrated ultrasound and photoacoustic tomography approach in non-invasive 3D imaging of human melanocytic and other skin lesions.
One of the minimally invasive cardiac surgery procedures is valvuloplasty for mitral regurgitation. Valvuloplasty uses artificial tendon cords to replace torn tendon cords and an annuloplasty ring to correct enlargement of the valve annulus. The appropriate positions for implanting the artificial tendon cords and the annuloplasty ring are marked with dye during surgery, but the endoscopic view is narrow and may be obscured by surgical instruments. In this study, we propose a system that estimates and displays the positions of hidden markers using the center-of-gravity coordinates of each marker and the positional relationships between markers during surgery. First, the spectral reflectance is estimated from color images obtained from a stereo endoscope, and the marker regions are extracted. Next, 3D information for each marker region is estimated by the stereo method, and the center-of-gravity position is calculated for each marker. Then, inter-frame matching is performed using the marker contours to detect hidden markers. Finally, the relationships between the calculated center-of-gravity positions are used to estimate the centers of gravity of the hidden markers. The effectiveness of the proposed method was confirmed by estimating the centers of gravity of markers in the blind spot using images of markers drawn on a fragment of a pig heart prepared to resemble a valve. The proposed system can compensate for blind spots by estimating the positions of hidden markers during surgery.
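Computing each marker's center of gravity from the extracted marker regions is the first step of the pipeline above; a minimal sketch, assuming the regions have already been segmented into an integer-labeled image (0 = background):

```python
import numpy as np

def centroids(label_img):
    """Center of gravity (row, col) for each labeled marker region."""
    out = {}
    for lab in np.unique(label_img):
        if lab == 0:
            continue                       # 0 is background, skip it
        rows, cols = np.nonzero(label_img == lab)
        out[int(lab)] = (rows.mean(), cols.mean())
    return out
```

Once per-marker centroids are available in both stereo views, triangulation yields the 3D center-of-gravity positions whose mutual geometry is then used to infer the hidden markers.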
In the decades since corneal transplantation was first performed, studies have sought to predict transplant success. A successful corneal transplantation requires satisfying various factors beyond biocompatibility between the donor cornea and the recipient's eye. Accordingly, various studies aim to develop an artificial cornea that does not require a donor. One important indicator of the success of corneal transplantation is the measurement of corneal thickness (CT) after transplantation. Previous studies measuring the CT of transplanted corneas mainly performed partial CT measurement with algorithms applied to optical coherence tomography (OCT) images. However, a single algorithm is ultimately limited in determining the suitability of the entire transplanted cornea. In this study, we automatically segmented the region of an artificial cornea implanted in a rabbit cornea using U-Net based models and, based on this, measured and analyzed the three-dimensional total thickness of the native cornea and the artificial cornea. Our results suggest that the thickness of the transplanted and native corneas can be measured automatically over time, providing an indicator for determining the success of corneal transplants.
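Given a binary segmentation of the cornea in an OCT B-scan, per-A-scan thickness follows from the first and last segmented pixels in each column; the axial-resolution value in the test is a placeholder, and this sketch is a generic illustration rather than the study's measurement code.

```python
import numpy as np

def thickness_map(mask, axial_res_um):
    """Per-A-scan corneal thickness from a binary B-scan segmentation.

    mask: (depth, width) boolean array, True where tissue was segmented.
    Returns thickness in micrometers per column; NaN where no tissue.
    """
    top = np.argmax(mask, axis=0)                            # first True row
    bot = mask.shape[0] - 1 - np.argmax(mask[::-1], axis=0)  # last True row
    t = (bot - top + 1).astype(float) * axial_res_um
    t[~mask.any(axis=0)] = np.nan                            # empty columns
    return t
```

Applying this column-wise map across all B-scans of a volume gives the three-dimensional total-thickness measurement described above.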
Lymph node (LN) metastasis is one of the most important prognostic factors in several common malignancies such as gastric cancer and breast cancer. The frozen section method is widely used for intraoperative pathological diagnosis, but it has drawbacks: experience is essential for specimen preparation and diagnosis, and freezing causes severe tissue damage. Microscopy with ultraviolet surface excitation (MUSE) has the potential to provide rapid diagnosis with a simpler technique than conventional histopathology based on hematoxylin and eosin (H&E) staining. We established a fluorescent staining protocol for deep-UV-excitation fluorescence imaging using terbium ions and Hoechst 33342 that enables clear discrimination of nucleoplasm, nucleolus, and cytoplasm. In formalin-fixed paraffin-embedded (FFPE) thin-sliced tissue sections of metastasis-positive and -negative LNs from gastric cancer patients, cancer detection by patch-based training of a deep convolutional neural network (DCNN) on the fluorescence images performed comparably to H&E images. However, MUSE images from non-thin-sliced tissue are difficult for pathologists to label as training data for supervised learning. We therefore attempt a deep-learning pipeline for LN metastasis detection in which CycleGAN translates MUSE images into FFPE thin-sliced tissue images, and diagnostic prediction is performed by a deep convolutional neural network trained on FFPE images. The modality translation using CycleGAN improved the pathological diagnosis of non-thin-sliced surface images by the DCNN model trained on FFPE images.
We employed a Pix2Pix generative adversarial network to translate multispectral fluorescence images into colored brightfield representations resembling H&E staining. The model was trained on 512×512-pixel paired image patches, with a manually stained image serving as the reference and the fluorescence images serving as the input. The baseline model, without any modifications, did not achieve high microscopic accuracy, exhibiting incorrect color attribution to various biological structures and the addition or removal of image features. However, by substituting simple convolutions with dense convolution units in the U-Net generator, we observed increased similarity of microscopic structures and improved color balance between the paired images. These improvements underscore the potential utility of virtual staining in histopathological analysis for veterinary oncology applications.