Chondrocyte viability is an important measure to consider when assessing cartilage health. Dye-based cell viability assays are not suitable for in vivo or long-term studies. We have introduced a label-free viability assay based on the assessment of high-resolution images of cells and collagen structure acquired with two-photon autofluorescence and second harmonic generation microscopy. By either visual or quantitative assessment, we were able to differentiate living from dead chondrocytes in these images. However, both techniques require human participation and have limited throughput. Throughput can be increased by using methods for automated cell-based image processing. Due to the poor image contrast, traditional image processing methods are ineffective on autofluorescence images produced by nonlinear microscopes. In this work, we examined chondrocyte segmentation and classification using Mask R-CNN, a deep learning approach, to implement automated viability analysis. With proper training, the network achieved 85% accuracy in chondrocyte viability assessment. This study demonstrates that automated and highly accurate image analysis is achievable with deep learning methods. This image processing approach can benefit other imaging applications in clinical medicine and biological research.
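The abstract does not give implementation details, so the following is only a minimal sketch of how a Mask R-CNN model could be set up for this task with the standard TorchVision fine-tuning recipe; the three-class scheme (background, live, dead), the score threshold, and all function names are assumptions made for illustration, not the authors' code.

```python
# Minimal sketch (assumed setup): fine-tune TorchVision's Mask R-CNN so that it
# segments chondrocytes and classifies each instance as "live" or "dead".
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, live chondrocyte, dead chondrocyte (assumed labels)

def build_model(num_classes=NUM_CLASSES):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # replace the box classification head for the new classes
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # replace the mask prediction head accordingly
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

def viability_from_predictions(pred, score_thresh=0.5):
    """Fraction of detected chondrocytes classified as live (class 1 assumed)."""
    keep = pred["scores"] > score_thresh
    labels = pred["labels"][keep]
    live = (labels == 1).sum().item()
    dead = (labels == 2).sum().item()
    return live / max(live + dead, 1)
```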
In recent studies of cartilage imaging with nonlinear optical microscopy, we discovered that the autofluorescence of chondrocytes provides useful information for the viability assessment of articular cartilage. However, one of the hurdles to applying this technology in research or clinical settings is the lack of image processing tools that can perform automated, cell-based analysis. In this report, we present our recent effort on cell segmentation using deep learning algorithms together with second harmonic generation images. Two traditional segmentation methods, adaptive thresholding and watershed, were used to compare the outcomes of the different approaches. We found that the deep learning algorithms did not show a significant advantage over the traditional methods. Once the cellular area is determined, the viability index is calculated as the intensity ratio between two autofluorescence channels within the cellular area. We found that the viability index correlated well with chondrocyte viability. Again, deep learning segmentation did not show a significant difference from the traditional segmentation methods in terms of this correlation.
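As a sketch of the viability-index computation described above, the cellular area could be segmented with an adaptive threshold and the channel intensity ratio taken over that area. The channel roles, threshold parameters, and function names below are assumptions for illustration only, not the exact pipeline of the report.

```python
# Sketch (assumed details): segment the cellular area with an adaptive threshold,
# then compute the viability index as the intensity ratio of two autofluorescence
# channels within that area.
import numpy as np
from skimage.filters import threshold_local
from skimage.morphology import remove_small_objects

def viability_index(ch_a, ch_b, block_size=51, min_size=64):
    combined = ch_a.astype(float) + ch_b.astype(float)
    mask = combined > threshold_local(combined, block_size)   # adaptive threshold
    mask = remove_small_objects(mask, min_size=min_size)      # drop noise specks
    eps = 1e-12
    return ch_a[mask].sum() / (ch_b[mask].sum() + eps)
```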
Covering the ends of long bones, articular cartilage provides a smooth, lubricated surface to absorb impact and distribute loads during movement so that the underlying bone is protected. This function is facilitated by a complex and well-organized extracellular matrix (ECM). As the only cell type in articular cartilage, chondrocytes are responsible for maintaining the homeostasis of the cartilage ECM; as such, chondrocyte viability is a critical parameter reflecting the quality of the cartilage. Most prevalent cell viability assays rely on dye staining and therefore cannot be used for longitudinal monitoring or in vivo assessment. Here we demonstrate that two-photon autofluorescence (TPAF) microscopy distinguishes live cells from dead cells in intact ex vivo cartilage tissue, providing a non-invasive method to assess cell viability. In our study, endogenous fluorophores such as nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) and flavin adenine dinucleotide (FAD) were used to image chondrocytes in cartilage on rat tibial condyles immediately after harvesting. Second harmonic generation (SHG) imaging was also performed to examine the integrity of the extracellular and pericellular matrix. On the same tissue, a cell viability assay with calcein-AM and ethidium homodimer-1 (EthD-1) labeling was used as a gold standard to identify live and dead cells. We found that live cells generally presented stronger NAD(P)H fluorescence than dead cells and were readily identified when the two channels were displayed in pseudo colors. Owing to its non-destructive nature, this method holds potential for assessing the cell viability of engineered or living tissues without dye labeling.
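The pseudo-color presentation mentioned above can be sketched as a simple two-channel RGB composite; the green/red color assignment and normalization are assumptions for illustration, not the paper's display settings.

```python
# Sketch: merge the NAD(P)H and FAD autofluorescence channels into a pseudo-color
# RGB composite so that live cells (stronger NAD(P)H signal) stand out visually.
import numpy as np

def two_channel_composite(nadph, fad):
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    rgb = np.zeros(nadph.shape + (3,))
    rgb[..., 1] = norm(nadph)  # NAD(P)H shown in green (assumed assignment)
    rgb[..., 0] = norm(fad)    # FAD shown in red (assumed assignment)
    return rgb
```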
Diffuse reflectance standards of known hemispherical reflectance Rh are widely used in optical and imaging studies. We have developed a stochastic surface model to investigate light reflection and its dependence on surface roughness. Through Monte Carlo simulations, the angle-resolved distributions of reflected light have been modeled as the result of local surface reflection with a constant reflectance Rs representing the overall reflectivity of a reflectance standard. The surface was modeled by an ensemble of random Gaussian surface profiles parameterized by a mean surface height δ and a transverse correlation length a. As δ/a decreases, the calculated reflected-light distributions were found to transition from the Lambertian to the specular reflection regime. Reflected light distributions were measured with three standards of nominal reflectance Rh equal to 10%, 80%, and 99%. The calculated results agree well with the measured angular distributions at different incident angles when Rs = Rh and δ = a = 3.5 μm.
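The Monte Carlo ray-tracing scheme itself is not reproduced here, but the random Gaussian surface profiles it relies on can be sketched as Gaussian-filtered white noise parameterized by δ and a; the grid size, spacing, and function name below are assumptions made for illustration.

```python
# Sketch: generate a 1-D random surface with Gaussian height statistics,
# RMS height delta, and transverse correlation length a, by convolving white
# noise with a Gaussian kernel (FFT-based convolution).
import numpy as np

def gaussian_surface(n=4096, dx=0.1, delta=3.5, a=3.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = (np.arange(n) - n // 2) * dx
    kernel = np.exp(-x**2 / a**2)          # Gaussian correlation kernel
    noise = rng.standard_normal(n)
    h = np.real(np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel))))
    h *= delta / h.std()                   # rescale to the requested surface height
    return x, h

# A smaller delta/a gives gentler local slopes, which is what pushes the
# reflected-light distribution from the Lambertian toward the specular regime.
```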
Bessel beams have been used in many applications due to their unique optical property of maintaining an unchanged intensity profile during propagation. In imaging applications, Bessel beams have been successfully used to provide extended foci for volumetric imaging and uniform illumination planes in light-sheet microscopy. Coupled with two-photon excitation, Bessel beams have also been used to realize projected volumetric fluorescence imaging. We previously demonstrated a stereoscopic solution, two-photon fluorescence stereomicroscopy (TPFSM), for recovering depth information in volumetric imaging with Bessel beams. In TPFSM, tilted Bessel beams were used to generate stereoscopic images on a laser scanning two-photon fluorescence microscope; after post-processing, the acquired volume images provided 3D perception when viewed through anaglyph 3D glasses. However, the tilted Bessel beams were generated by laterally shifting either an axicon or an objective, and the slow imaging speed and severe aberrations made this approach difficult to use for real-time volume imaging. In this article, we report recent improvements of TPFSM with a newly designed scanner and imaging software, which allow 3D stereoscopic imaging without moving any optical components in the setup. These changes have dramatically improved focusing quality and imaging speed so that TPFSM can potentially be performed in real time to provide 3D visualization in scattering media without post-processing.
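The anaglyph display step described above can be sketched as a red-cyan combination of the two projections acquired with oppositely tilted Bessel beams; the inputs, normalization, and channel assignment are assumptions for illustration, not the paper's exact processing chain.

```python
# Sketch: combine a stereo pair of intensity projections into a red-cyan
# anaglyph for viewing with 3D glasses.
import numpy as np

def red_cyan_anaglyph(left, right):
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    l, r = norm(left), norm(right)
    # red channel from the left view, green + blue (cyan) from the right view
    return np.dstack([l, r, r])
```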