Interventional Radiology offers many advantages: it is minimally invasive, carries a low risk of infection, and allows faster patient recovery. However, prolonged exposure to X-ray radiation during the procedure can cause serious harm (cancer or skin burns) to the patient and, above all, to the surgeons. The state of the art includes remote catheter navigation systems that use magnetic fields to steer the catheter from an external cabin; however, such systems require large equipment to be installed in the operating room. To limit X-ray doses without installing large equipment, our aim is to reduce the number of images acquired from X-ray imagers by using sensors that can be integrated into the catheter (such as Fiber Bragg Grating sensors or MEMS sensors) to reconstruct the catheter shape without continuous imaging. This requires accurate and reliable information on the catheter position in order to correct the drift of the embedded sensors; that position can be obtained by image processing on X-ray images, which are noisy and contain artefacts. Previous work by the Medic@ team has shown that conventional image processing approaches are generally too slow or not precise enough. A U-Net convolutional neural network is therefore a possible solution for detecting the entire catheter (body and tip) and obtaining the coordinates of its end. In this article, we explain and present our first results using the U-Net architecture to detect the tip and body of the catheter, together with a Kalman filter used for data fusion, and we evaluate its efficiency in reducing the number of images needed in a curvilinear vessel, using generated data.
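To make the detection stage concrete, the sketch below shows a compact U-Net in PyTorch with one output channel for the catheter body mask and one heatmap channel for the tip, whose argmax gives the tip coordinates. This is a minimal sketch: the single-channel input, the two output channels, and the layer widths are illustrative assumptions, not the exact network used in our experiments.

```python
# Minimal U-Net sketch (PyTorch). Hypothetical configuration: grayscale
# X-ray input, two output maps (catheter body mask and tip heatmap).
# Depth and channel widths are illustrative, not our exact network.
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        # Contracting path.
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.enc3 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Expanding path with skip connections.
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)           # per-class logits

# The tip coordinate is read as the argmax of the tip heatmap channel.
net = UNet()
logits = net(torch.randn(1, 1, 256, 256))   # shape (1, 2, 256, 256)
tip_map = logits[0, 1]
tip_row, tip_col = divmod(int(tip_map.argmax()), tip_map.shape[1])
```

The data-fusion idea can likewise be sketched with a scalar Kalman filter: the drifting catheter sensor drives the prediction step at every sample, while an X-ray position fix (such as a U-Net tip detection) corrects the estimate only when an image is available. All values below (rates, noise variances, the constant advance speed) are illustrative assumptions, not measured parameters.

```python
# Minimal 1D Kalman-filter sketch of the data fusion (NumPy).
# Assumed setup: the embedded sensor gives a high-rate but drifting
# displacement increment along the vessel centreline, and an image-based
# position fix arrives only every `fix_every` sensor samples.
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01          # sensor period (s), hypothetical
n_steps = 500
fix_every = 50     # one X-ray fix per 50 sensor samples

q = 1e-4           # process noise (sensor drift) variance
r = 0.5 ** 2       # X-ray measurement noise variance

# Ground truth: the catheter advances at 5 mm/s along the centreline.
truth = 5.0 * dt * np.arange(n_steps)

x, p = 0.0, 1.0    # position estimate and its variance
estimates = []
for k in range(n_steps):
    # Prediction: integrate the drifting sensor increment.
    x += 5.0 * dt + rng.normal(0.0, np.sqrt(q))
    p += q
    # Update: only when an X-ray image (hence a position fix) is available.
    if k % fix_every == 0:
        z = truth[k] + rng.normal(0.0, np.sqrt(r))  # noisy image fix
        gain = p / (p + r)                          # scalar Kalman gain
        x += gain * (z - x)
        p *= (1.0 - gain)
    estimates.append(x)

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"RMSE with 1 image per {fix_every} samples: {rmse:.3f} mm")
```

Varying `fix_every` in such a simulation exposes the trade-off of interest here: how few X-ray images can be acquired before the residual sensor drift becomes unacceptable.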
Touchless Human-Computer Interaction (HCI) is important in sterile environments, especially in operating rooms (OR). Surgeons need to interact with images from CT scanners, X-ray imagers, ultrasound devices, etc. Contamination problems can arise if surgeons have to touch a keyboard or a mouse. To reduce contamination and give the surgeon more autonomy during the operation, several projects have been developed within the Medic@ team since 2011. To recognize the hand and its gestures, two main projects, Gesture Tool Box and K2A, based on the Kinect device (with a depth camera), were prototyped. Hand gestures were detected by segmentation and hand descriptors on RGB images, but hand detection always depended on the depth camera (Kinect). Moreover, this approach does not allow the system to adapt to a new gesture requested by the end user: each new gesture requires a new algorithm to be programmed and tested. Thanks to the evolution of NVIDIA graphics cards, which reduce the processing time of convolutional neural networks (CNNs), the latest approach explored was deep learning. In the Gesture Tool Box project, hand gesture detection was analyzed using a pre-trained CNN (VGG16) and transfer learning. The results were very promising, with 85% accuracy in detecting 10 different gestures from LSF (French Sign Language), and a user interface was created that gives the end user the autonomy to add their own gestures and run the transfer learning automatically. However, problems remained with the recognition delay (0.8 s) and the dependency on the Kinect device. In this article, a new architecture is proposed that uses standard cameras and reduces the delay of hand and gesture detection. The state of the art identifies YOLOv2, implemented in the Darknet framework, as a good option, with faster recognition times than other CNNs. We have implemented YOLOv2 for hand and sign detection, obtaining good gesture detection results with a recognition time of 0.10 s under laboratory conditions. Future work will include reducing the errors of our model, recognizing intuitive and standardized gestures, and testing under real conditions.
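As an illustration of such a pipeline, the sketch below runs a Darknet-trained YOLOv2 model on a frame from a standard webcam using OpenCV's DNN module. The cfg/weights file names and the class list are hypothetical placeholders for a network trained on a hand-sign dataset, and the decoding follows the usual layout of the Darknet region layer output ([cx, cy, w, h, objectness, class scores...], normalized to the image size).

```python
# Minimal YOLOv2 inference sketch with OpenCV's DNN module.
# "yolov2-hands.cfg" / "yolov2-hands.weights" and CLASSES are
# hypothetical placeholders, not our released model files.
import cv2
import numpy as np

CLASSES = ["hand", "sign_a", "sign_b"]  # illustrative labels
net = cv2.dnn.readNetFromDarknet("yolov2-hands.cfg", "yolov2-hands.weights")

def detect(frame, conf_threshold=0.5):
    h, w = frame.shape[:2]
    # Darknet networks expect a square, scaled, RGB blob.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    out = net.forward()  # one row per candidate box
    detections = []
    for row in out:
        # row[4] is the objectness score; as in the OpenCV samples,
        # the class score is used as the detection confidence.
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > conf_threshold:
            cx, cy = row[0] * w, row[1] * h
            bw, bh = row[2] * w, row[3] * h
            box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
            detections.append((CLASSES[class_id], confidence, box))
    return detections  # non-maximum suppression omitted for brevity

cap = cv2.VideoCapture(0)  # standard webcam, no depth sensor needed
ok, frame = cap.read()
if ok:
    for label, conf, (x, y, bw, bh) in detect(frame):
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cap.release()
```

Because YOLOv2 performs detection in a single forward pass over the whole frame, this design removes the separate segmentation step and the depth-camera dependency of the earlier Kinect-based prototypes.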