Low-light image enhancement has posed a significant challenge in recent years due to the non-uniform luminance of real-world images. Color restoration, luminance mapping, and curve and level estimation are among the techniques enhancement algorithms employ. However, because real-world luminance is non-uniform, certain areas require local rather than global enhancement. To tackle this issue, this paper introduces a novel deep-learning methodology employing two convolutional network architectures: the first classifies the brightness level of the input image, while the second enhances the image based on the information obtained from the first. The model is trained on two datasets commonly used in state-of-the-art research: LOL (Low-Light) and Synthetic Low-light. Both contain low-light and ground-truth image pairs, which makes it possible to properly estimate the mapping between non-uniform and uniform luminosity. The proposed algorithm is applied to resized images from the UHD-LOL4k dataset, and performance is evaluated with the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Natural Image Quality Evaluator (NIQE), and Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) metrics. According to the results, the proposed method outperforms algorithms with more complex architectures in the literature. The double convolutional architecture emphasizes local enhancement in real-world scenes and global enhancement in images with very low luminosity. Overall, this paper makes a significant contribution to low-light image enhancement, offering an effective solution to the challenges posed by non-uniform luminance in real-world images.
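The two-stage idea described in this abstract (classify the input's brightness, then enhance according to the class) can be sketched without the convolutional networks themselves. In the minimal numpy sketch below, a mean-luminance threshold stands in for the classification network and a class-dependent gamma lift stands in for the enhancement network; the thresholds and gamma values are hypothetical illustrations, not the paper's trained models:

```python
import numpy as np

def classify_brightness(img, thresholds=(0.25, 0.5)):
    """First stage: label the overall brightness of a [0, 1] image.
    Stand-in for the paper's classification network (hypothetical thresholds)."""
    mean = float(img.mean())
    if mean < thresholds[0]:
        return "low"
    if mean < thresholds[1]:
        return "medium"
    return "high"

def enhance(img, level):
    """Second stage: apply a stronger gamma lift the darker the class.
    Stand-in for the paper's enhancement network (hypothetical gammas)."""
    gamma = {"low": 0.4, "medium": 0.7, "high": 1.0}[level]
    return np.clip(img.astype(float) ** gamma, 0.0, 1.0)

dark = np.full((4, 4), 0.1)  # toy low-light image in [0, 1]
out = enhance(dark, classify_brightness(dark))
```

In the actual method, the second network would additionally act locally rather than with a single global gamma, which is what allows non-uniformly lit scenes to be handled.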
Through optical equipment such as the ophthalmoscope, it is possible to visualize and image the inner surface of the eye, where the main structures of the retina can be observed. Visual analysis of the retinal vasculature is widely used by ophthalmologists for the prevention, diagnosis, and monitoring of retinal diseases. Nevertheless, with pathologies that generate an opacity in the crystalline lens (such as cataracts), visualizing blood vessels becomes difficult, since the fundus image lacks contrast. In this work, a multiscale decomposition method based on Weighted Least Squares (WLS) optimization is applied to cataractous eye fundus images, with the aim of obtaining a better blood-vessel-to-background contrast. The proposed scheme is implemented over a publicly available cataract eye fundus dataset. The experimental results provide a notable visual improvement in the contrast and restoration of blood-vessel pixels while maintaining adequate saturation and lighting for visual analysis. The visual improvement of the vasculature represents a potential benefit in the ophthalmic analysis of patients with cataracts, since it is possible to observe the vascular morphology in greater detail while keeping relevant image features.
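The base/detail idea behind a WLS-style multiscale decomposition can be illustrated in a few lines: peel off detail layers at increasing scales, then recombine them with gains above one to raise vessel-to-background contrast. In the sketch below a naive box blur stands in for the edge-preserving WLS smoother the paper actually uses, and the radii and gains are hypothetical:

```python
import numpy as np

def box_blur(img, r):
    """Naive 2-D box blur with edge padding (stand-in for the WLS smoother)."""
    pad = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def multiscale_boost(img, radii=(1, 4), gains=(1.5, 1.2)):
    """Decompose into base + detail layers, then recombine with
    amplified details to enhance local contrast (hypothetical values)."""
    base = img.astype(float)
    details = []
    for r in radii:
        smooth = box_blur(base, r)
        details.append(base - smooth)  # detail layer at this scale
        base = smooth
    out = base
    for d, g in zip(details, gains):
        out += g * d
    return np.clip(out, 0.0, 1.0)

fundus = np.zeros((16, 16))
fundus[:, 8:] = 0.6  # toy image with a single vertical edge
boosted = multiscale_boost(fundus)
```

The WLS smoother matters in practice because, unlike the box blur, it avoids halo artifacts around strong edges such as the optic disc when the detail layers are amplified.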
In recent years, segmentation and projection techniques for structures of medical interest have grown significantly due to their usefulness; doctors use them as tools for the diagnosis and follow-up of different diseases. Segmentation of the Inner Limiting Membrane (ILM) in retinal scans acquired with the Optical Coherence Tomography (OCT) imaging technique has generated particular interest in the medical area, since it provides clinically relevant information about diseases such as glaucoma, Diabetic Macular Edema (DME), and multiple sclerosis. Furthermore, the generation of a surface showing the current morphology of the scanned retinal area is a tool that complements medical analysis. In this paper, a new methodology is presented for ILM segmentation on OCT retinal images and for surface projection from axially spaced scans acquired over the macular and peripapillary zones. The proposed scheme consists of a wavelet-based denoising step and a contrast enhancement stage using the eigenvalues of the Hessian matrix, while the segmentation process is based on the Canny edge detection algorithm; these stages are applied to each image of a C-scan, followed by surface generation using cubic spline interpolation. The method is applied to a publicly available OCT dataset composed of 22 patients with several retinal diseases, obtaining a correct individual segmentation of each image, while the surface generation results demonstrate high performance in the visualization of the ILM morphology, which can be used for dimensional analysis of this membrane.
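The core boundary search in such a pipeline can be illustrated per A-scan column: the ILM appears as the first strong dark-to-bright transition when descending from the vitreous into the retina. In the simplified sketch below, a plain gradient threshold stands in for the paper's wavelet denoising, Hessian-based enhancement, and Canny stages, and the threshold value is hypothetical:

```python
import numpy as np

def segment_ilm(bscan, thresh=0.2):
    """For each A-scan column, return the row of the first strong
    dark-to-bright transition, taken as the ILM boundary.
    Simplified stand-in for the Canny-based detection (hypothetical threshold)."""
    grad = np.diff(bscan.astype(float), axis=0)  # vertical intensity gradient
    rows = np.full(bscan.shape[1], -1)           # -1 marks columns with no edge
    for col in range(bscan.shape[1]):
        hits = np.nonzero(grad[:, col] > thresh)[0]
        if hits.size:
            rows[col] = hits[0] + 1  # row just below the transition
    return rows

# toy B-scan: dark vitreous above row 10, bright retina from row 10 down
bscan = np.zeros((20, 5))
bscan[10:, :] = 0.8
ilm = segment_ilm(bscan)
```

Repeating this over every B-scan of a C-scan yields one boundary curve per slice; the paper then interpolates across the axially spaced boundaries with cubic splines to generate the projected ILM surface.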