The rapid progress in deep learning, particularly in convolutional neural networks (CNNs), has significantly enhanced the effectiveness and efficiency of hyperspectral image (HSI) classification. While CNN-based approaches excel at enriching local features, they often struggle to capture long-range dependencies in sequential data. To address this limitation, an attention mechanism can be integrated with CNN architectures to capture rich global and local representations. Transformer architectures and their variants, known for their ability to model long-distance dependencies in sequential data, have gradually found applications in HSI classification tasks. Recently, the Retentive Network (RetNet) has emerged, claiming superior scalability and efficiency compared to traditional transformers. One pivotal distinction between the self-attention operator in the Transformer and the retention mechanism in RetNet is the introduction of a decay parameter, which explicitly scales the attention weight assigned to each token according to its distance from neighboring tokens, resulting in improved performance. However, no study has yet examined the effectiveness of RetNet for HSI analysis. In this study, we incorporate the retention mechanism and a progressive neuron expansion structure into pixel-wise HSI classification, and thus name our proposed method the Retentive Progressive Expansion Network (R-PEN). Experimental analyses conducted on real-world hyperspectral image datasets show that the R-PEN model surpasses other pertinent deep learning models in classification performance.
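The decay parameter described above can be illustrated with a minimal sketch of the single-head, parallel form of retention from the RetNet paper. This is not the authors' R-PEN implementation; the shapes and the scalar decay `gamma` are illustrative assumptions.

```python
import numpy as np

def retention(Q, K, V, gamma):
    """Parallel form of the retention mechanism (illustrative sketch).

    Unlike softmax self-attention, each query-key score is multiplied
    by gamma**(n - m), so tokens farther in the past contribute
    exponentially less, and future tokens (m > n) are masked out.
    """
    n = Q.shape[0]
    idx = np.arange(n)
    # Causal decay mask: D[n, m] = gamma^(n-m) for m <= n, else 0.
    D = np.where(idx[:, None] >= idx[None, :],
                 gamma ** (idx[:, None] - idx[None, :]), 0.0)
    scores = (Q @ K.T) * D          # decay-weighted scores
    return scores @ V

# Toy usage with random token features (4 tokens, dim 8).
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = retention(Q, K, V, gamma=0.9)  # shape (4, 8)
```

Because the decay mask is lower-triangular with entries that shrink geometrically in token distance, recent tokens dominate each output position, which is the behavior the abstract attributes to the retention mechanism.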
Underwater imagery often exhibits significant degradation and poor quality compared to outdoor imagery. To compensate, Single-Image Super-Resolution (SISR) and enhancement algorithms are used to lessen this degradation and produce high-resolution images. In this study, we apply state-of-the-art Simultaneous Enhancement and Super-Resolution (SESR) and SISR models to different sets of downscaled images from the comprehensive RUOD dataset. We then conduct a qualitative and quantitative analysis of the upscaled and enhanced images using standard underwater image quality metrics (IQMs). Subsequently, we evaluate the robustness of the state-of-the-art YOLO-NAS detector against image sets with varying downscaled spatial resolutions. Lastly, we examine the impact that the SISR and SESR models have on YOLO-NAS detector performance. The findings reveal a decline in detection performance on the downscaled test images and a further decline on the upscaled and enhanced images produced by the SISR and SESR models, suggesting a negative relationship between such models and detection.