Interventional Radiology has many advantages: it is minimally invasive, carries a low risk of infection, and allows better recuperation for the patient. However, the procedure can cause serious harm (cancer or skin burns) to the patient and, above all, to the surgeons if they are exposed to X-ray radiation for long periods. In the state of the art, remote catheter navigation systems have been proposed in which the equipment uses magnetic fields to control and move the catheter from an external cabin; however, such systems require large equipment to be installed in the operating room. To limit X-ray doses without installing large equipment, our aim is to decrease the number of images coming from X-ray imagers by using sensors that can be integrated into the catheter (such as Fiber Bragg Grating sensors or MEMS sensors) to reconstruct an image without the need for continuous imaging. To do so, accurate and reliable information on the position of the catheter is required to correct the drift of the catheter's sensors. This position can be obtained by image processing on X-ray images, which are noisy and contain artefacts. Previous work by the Medic@ team has shown that conventional image processing approaches are generally too slow or not precise enough. A U-Net convolutional neural network is therefore a possible solution for detecting the entire catheter (body and head) and obtaining the coordinates of its end. In this article, we explain and show our first results using the U-Net architecture to detect the tip and body of the catheter, together with a Kalman filter used for data fusion, and we evaluate its efficiency in reducing the number of images needed in a curvilinear vessel, using generated data.
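To make the data-fusion step concrete, here is a minimal sketch (our own illustration, not the paper's implementation) of a 1-D constant-velocity Kalman filter that integrates a drift-prone onboard sensor at every step and corrects it with a sparse, more accurate image-based position. All noise values, the time step, and the drift model are assumptions.

```python
import numpy as np

# Minimal data-fusion sketch: a 1-D constant-velocity Kalman filter.
# All numeric values below are illustrative assumptions.
dt = 0.1                        # time step (s), assumed
F = np.array([[1, dt],
              [0, 1]])          # constant-velocity state transition
H = np.array([[1, 0]])          # we only measure position
Q = np.diag([1e-4, 1e-3])       # process noise (assumed)
R_sensor = np.array([[5e-2]])   # noisy, drifting onboard sensor (assumed)
R_image = np.array([[1e-3]])    # more accurate image-based measurement (assumed)

x = np.zeros((2, 1))            # state: [position, velocity]
P = np.eye(2)

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, R):
    global x, P
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Fuse a sensor reading every step and an image reading every 10th step,
# so the sparse X-ray frames correct the accumulated sensor drift.
rng = np.random.default_rng(0)
true_pos = 0.0
for k in range(100):
    true_pos += 0.05 * dt                       # catheter advances
    predict()
    drift = 0.001 * k                           # simulated sensor drift
    update(np.array([[true_pos + drift + rng.normal(0, 0.02)]]), R_sensor)
    if k % 10 == 0:                             # sparse X-ray image frame
        update(np.array([[true_pos + rng.normal(0, 0.005)]]), R_image)

print("estimated position:", float(x[0]), "true:", true_pos)
```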
In operating rooms (OR), physicians must work in strict compliance with asepsis rules so as not to endanger the health of patients. In laparoscopic minimally invasive surgery, where the surgeon's field of view is restricted, computers are needed to provide the missing information. Physicians must therefore interact with computers either directly, by manipulating the mouse and keyboard after removing their gloves or by using a protective cover for the devices, or indirectly, by voice-commanding an assistant to do so. However, in addition to being time-consuming, the first option can cause hygiene issues and the second a lack of precision. The need for better ways of interacting with computers has led to extensive research in this area over the last ten years, especially in Touchless Human-Machine Interaction (THMI). Indeed, THMI, which includes gesture recognition, voice recognition and eye tracking, has a promising future in the medical field, allowing surgeons to interact with devices by themselves, thereby avoiding error-prone processes while complying with asepsis rules. In this context, the "Intelligent Touchless Glassless Human-Machine Interface" (ITG-HMI) project aims to provide a new tool for viewing and manipulating 3D objects. In this article, we present how this interface was implemented, through the detection and recognition of hand gestures using deep learning, the development of a graphical interface to display 3D models, and the mapping of recognized gestures to the actions to perform.
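As an illustration of the last step described above (not the ITG-HMI code itself), the sketch below maps gesture labels produced by a recognizer to 3D-view manipulations; the gesture names and action parameters are assumptions.

```python
from dataclasses import dataclass

# Hypothetical mapping from recognized gesture labels to 3D-view actions.
@dataclass
class ViewState:
    yaw: float = 0.0      # rotation around the vertical axis (degrees)
    pitch: float = 0.0    # rotation around the horizontal axis (degrees)
    zoom: float = 1.0     # scale factor

def apply_gesture(state: ViewState, gesture: str) -> ViewState:
    """Translate one recognized gesture into a view manipulation."""
    actions = {
        "swipe_left":  lambda s: ViewState(s.yaw - 15, s.pitch, s.zoom),
        "swipe_right": lambda s: ViewState(s.yaw + 15, s.pitch, s.zoom),
        "swipe_up":    lambda s: ViewState(s.yaw, s.pitch + 15, s.zoom),
        "swipe_down":  lambda s: ViewState(s.yaw, s.pitch - 15, s.zoom),
        "pinch_in":    lambda s: ViewState(s.yaw, s.pitch, s.zoom * 0.9),
        "pinch_out":   lambda s: ViewState(s.yaw, s.pitch, s.zoom * 1.1),
    }
    # Unknown gestures leave the view unchanged.
    return actions.get(gesture, lambda s: s)(state)

state = ViewState()
for g in ["swipe_right", "swipe_right", "pinch_out"]:  # recognizer output
    state = apply_gesture(state, g)
print(state)  # ViewState(yaw=30.0, pitch=0.0, zoom=1.1)
```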
The medical field has always benefited from the latest technological advances, such as radiography, robotics or, more recently, augmented reality. Indeed, progress in image analysis and augmented reality has led to major therapeutic advances in surgery as well as in diagnosis. One of the most important techniques in medical image analysis is registration. Image registration is the process of matching two or more images; more concretely, it consists in finding the transformation that minimizes the difference between them. The transformation can be rigid (composed of rotations and translations only), affine (composed of rotations, translations and scaling), or non-rigid. Even though rigid registration may seem easy to perform, developing and implementing solutions that achieve fast, precise and robust rigid registration of complex objects remains challenging, especially for 3D objects. One of the best-known and most widely used rigid-registration algorithms is the Iterative Closest Point (ICP) algorithm, implemented notably in the Open3D library. However, ICP cannot handle non-rigid registration, which is why we decided to use the Coherent Point Drift (CPD) algorithm with non-rigid deformations, through the PyCPD library. In this paper, we present an efficient method for non-rigid registration applied to deformed liver models, robust to translations, rotations and cropping, even though it still fails on the most complex cases.
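As an illustration of the registration step, the following sketch runs non-rigid CPD through the PyCPD library on synthetic point clouds; the alpha/beta values and the toy deformation are assumptions, not the paper's tuning or its liver data.

```python
import numpy as np
from pycpd import DeformableRegistration  # pip install pycpd

# Synthetic stand-in for a deformed surface (not real liver data).
rng = np.random.default_rng(42)
target = rng.uniform(-1, 1, (200, 3))
# Source: the same points, rotated and smoothly deformed (illustrative).
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
source = target @ R.T + 0.05 * np.sin(3 * target)

# X is the fixed target, Y the moving source; alpha and beta control the
# smoothness and width of the non-rigid deformation field (assumed values).
reg = DeformableRegistration(X=target, Y=source, alpha=2.0, beta=2.0)
deformed, (G, W) = reg.register()  # deformed ~ Y + G @ W

# Rough residual; index correspondence holds here only because the
# source was built from the target.
rmse = np.sqrt(np.mean(np.sum((deformed - target) ** 2, axis=1)))
print(f"post-registration RMSE: {rmse:.4f}")
```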
Touchless Human-Machine Interaction (THMI) is important in sterile environments, especially in operating rooms (OR). Surgeons need to interact with images from scanners, X-ray systems, ultrasound, etc. Contamination problems may arise if surgeons have to touch a keyboard or mouse. To reduce contamination and give surgeons more autonomy during operations, several projects have been developed in the Medic@ team since 2011. To recognize the hand and its gestures, two main projects, Gesture Tool Box and K2A, based on the Kinect device (with a depth camera), were prototyped. Hand gesture detection was performed by segmentation and hand descriptors on RGB images, but always with a dependency on the depth camera (Kinect) for detecting the hand. Moreover, this approach does not allow the system to adapt to a new gesture requested by the end user: for each new gesture, a new algorithm must be programmed and tested. Thanks to the evolution of NVIDIA cards, which reduce the processing time of convolutional neural networks (CNNs), the most recent approach explored was deep learning. In the Gesture Tool Box project, hand gesture detection was analyzed using a CNN (a pre-trained VGG-16) and transfer learning. The results were very promising, showing 85% accuracy on the detection of 10 different gestures from LSF (French Sign Language); it was also possible to create a user interface that gives end users the autonomy to add their own gestures and run the transfer learning automatically. However, problems remained with the recognition latency (0.8 s) and the dependency on the Kinect device. In this article, a new architecture is proposed that uses standard cameras and reduces the latency of hand and gesture detection. The state of the art identifies YOLOv2, with the Darknet framework, as a good option, offering faster recognition than other CNNs. We have implemented YOLOv2 for hand and sign detection, with good results in gesture detection and a recognition time of 0.10 seconds in laboratory conditions. Future work will include reducing our model's errors, recognizing intuitive and standard gestures, and testing in real conditions.
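As a hedged illustration of the inference side, the sketch below runs a YOLOv2 Darknet model through OpenCV's DNN module; the file names, class list and thresholds are placeholders, not the project's actual model.

```python
import cv2
import numpy as np

# Hypothetical label set and model files (placeholders).
CLASSES = ["hand", "gesture_A", "gesture_B"]
net = cv2.dnn.readNetFromDarknet("yolov2_hands.cfg", "yolov2_hands.weights")

frame = cv2.imread("frame.jpg")  # one frame from a standard camera
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# YOLOv2's region layer outputs rows of
# [center_x, center_y, width, height, objectness, class scores...],
# with coordinates relative to the input size.
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:  # assumed threshold
            cx, cy = det[0] * w, det[1] * h
            bw, bh = det[2] * w, det[3] * h
            x, y = int(cx - bw / 2), int(cy - bh / 2)
            print(CLASSES[class_id], confidence, (x, y, int(bw), int(bh)))
```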
Proc. SPIE 9786, Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling
KEYWORDS: Surgery, Cameras, Video, Field programmable gate arrays, Video surveillance, Computer vision technology, Augmented reality, Signal processing, Video compression, Machine vision, Video processing, Embedded systems, Medical devices, Laparoscopy
Hybrid operating rooms are an important development in the medical ecosystem: they allow the advantages of radiological imaging and surgical tools to be integrated in the same procedure. However, one of the challenges faced by clinical engineers is to support the connectivity and interoperability of medical-electrical point-of-care devices. A system that enabled plug-and-play connectivity and interoperability for medical devices would improve patient safety, save hospitals time and money, and provide data for electronic medical records. In this paper, we propose a hardware platform dedicated to collecting and synchronizing, in real time, multiple videos captured from medical equipment. The final objective is to integrate augmented reality technology into the operating room (OR) in order to assist the surgeon during a minimally invasive operation. To the best of our knowledge, there is no prior work dealing with hardware-based video synchronization for augmented reality applications in the OR. Hardware synchronization methods can embed a temporal value, a so-called timestamp, into each sequence on-the-fly and require no post-processing, but they require specialized hardware; the design of our hardware, however, is simple and generic. This approach was adopted and implemented in this work, and its performance is evaluated by comparison to state-of-the-art methods.
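As a software-side illustration of the principle (the paper's contribution is the hardware platform itself), the sketch below pairs frames from two streams using embedded timestamps; the timestamp units and skew tolerance are assumptions.

```python
from bisect import bisect_left

# Once each frame carries a hardware timestamp, streams can be aligned by
# pairing every frame of one stream with the nearest-in-time frame of the
# other. Timestamps here are in microseconds (assumed).
def align(ts_a, ts_b, max_skew_us=5_000):
    """Pair each frame index of stream A with the nearest frame of B."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = bisect_left(ts_b, t)
        # Candidate neighbours in B: just before and just after t.
        best = min((k for k in (j - 1, j) if 0 <= k < len(ts_b)),
                   key=lambda k: abs(ts_b[k] - t))
        if abs(ts_b[best] - t) <= max_skew_us:
            pairs.append((i, best))
    return pairs

# Example: a 30 fps stream and a 25 fps stream; only frames captured
# within 5 ms of each other are paired.
cam_a = [int(k * 1e6 / 30) for k in range(30)]
cam_b = [int(k * 1e6 / 25) for k in range(25)]
print(align(cam_a, cam_b))  # [(0, 0), (6, 5), (12, 10), (18, 15), (24, 20)]
```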