Catheter tubes and lines are among the most common findings on a chest x-ray. A misplaced catheter can cause serious complications such as pneumothorax, cardiac perforation, or thrombosis, so assessment of catheter position is of utmost importance. To prevent these problems, radiologists typically examine chest x-rays to evaluate catheter position after insertion and throughout intensive care. However, this process is both time-consuming and prone to human error. Efficient and dependable automated interpretation has the potential to lower procedural costs, lessen the burden on radiologists, and enhance the level of patient care. To address this challenge, we investigate the accurate segmentation of catheter tubes and lines in chest x-rays using deep learning models. We employ transfer learning and transformer-based networks through two different models: a U-Net++ model with an ImageNet-pretrained EfficientNet encoder, which leverages the diverse visual features learned on ImageNet to improve segmentation accuracy, and TransUNet, a transformer-based U-Net architecture chosen for its ability to capture long-range dependencies in complex medical image segmentation tasks. Our experiments reveal the effectiveness of the U-Net++ model in handling noisy and artifact-laden images and TransUNet's potential for capturing complex spatial features. We compare both models using the Dice coefficient as the evaluation metric and find that U-Net++ outperforms TransUNet on this segmentation metric. Our aim is to achieve more robust and reliable catheter tube detection in chest x-rays, ultimately enhancing clinical decision-making and patient care in critical healthcare settings.
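Since the Dice coefficient is the evaluation metric used to compare the two models, a minimal sketch of how it is computed on binary segmentation masks may be helpful; the function name, epsilon smoothing term, and toy masks below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).

    `eps` is a small smoothing term (an assumption here) that avoids
    division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 2x2 predicted region vs. a 2x3 ground-truth region,
# overlapping in 4 pixels: Dice = 2*4 / (4 + 6) = 0.8.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:3] = 1
target[1:3, 0:3] = 1
print(round(dice_coefficient(pred, target), 3))  # → 0.8
```

A Dice score of 1.0 indicates perfect overlap between prediction and ground truth, while 0.0 indicates no overlap, which is why it is a common choice for thin, imbalanced structures such as catheter lines.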