Digital tomosynthesis (DTS) often suffers from slow reconstruction due to the high computational complexity involved, particularly when iterative methods are employed. To meet clinical performance constraints, graphics processing units (GPUs) have been investigated and have proved to be an efficient platform for accelerating tomographic reconstruction. However, hardware programming constraints have often led to complicated memory management and resulted in reduced accuracy or compromised performance. In this paper we propose a new GPU-based reconstruction framework targeting tomosynthesis applications. Our framework benefits from the latest GPU functionality and improves on the designs of previous implementations. A high-quality ray-driven forward projection helps simplify the data flow when arbitrary acquisition matrices are provided. Our results show that the new framework achieves near-interactive reconstruction speed with no loss of accuracy.
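To make the idea of a ray-driven forward projection concrete, the following is a minimal CPU sketch, not the paper's GPU implementation: one ray is cast from the source through each detector pixel and the volume is sampled at fixed steps along it. The function name `forward_project`, the nearest-neighbour sampling, and the uniform step length are all simplifying assumptions for illustration.

```python
import numpy as np

def forward_project(volume, src, det_origin, det_u, det_v, nu, nv, step=0.5):
    """Ray-driven forward projection (illustrative sketch).

    Casts one ray per detector pixel from the source position `src`
    through the pixel centre, accumulating volume samples taken at a
    fixed step length with nearest-neighbour lookup.
    """
    proj = np.zeros((nv, nu))
    shape = np.array(volume.shape)
    for iv in range(nv):
        for iu in range(nu):
            pix = det_origin + iu * det_u + iv * det_v  # pixel centre
            d = pix - src
            length = np.linalg.norm(d)
            d = d / length                               # unit ray direction
            acc = 0.0
            for k in range(int(length / step)):
                p = src + (k + 0.5) * step * d           # midpoint sample
                idx = np.floor(p).astype(int)
                if np.all(idx >= 0) and np.all(idx < shape):
                    acc += volume[tuple(idx)] * step     # line-integral sum
            proj[iv, iu] = acc
    return proj
```

On a GPU each detector pixel would map to one thread, which is what makes the ray-driven formulation attractive: every ray is independent and writes to exactly one output pixel.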
Using 2D-3D registration, it is possible to extract the body transformation between the coordinate systems of X-ray and volumetric CT images. Our initial motivation is improving the accuracy of external beam radiation therapy, an effective method for treating cancer in which CT data play a central role in treatment planning. A rigid body transformation is used to compute the correct patient setup. The drawback of such approaches is that the rigidity assumption on the imaged object does not hold for most patients, mainly due to respiratory motion. In the present work, we address this limitation by proposing a flexible framework for deformable 2D-3D registration consisting of a learning phase incorporating 4D CT data sets, hardware-accelerated free-form generation of digitally reconstructed radiographs (DRRs), 2D motion computation, and 2D-3D back projection.
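The deformable DRR step above can be pictured as two operations: warp the CT volume with a displacement field, then project it to a 2D radiograph. The sketch below is a deliberately simplified stand-in, assuming nearest-neighbour resampling and a parallel-beam projection (a plain axis sum) rather than the hardware-accelerated cone-beam rendering the framework uses; the names `warp_volume` and `drr` are illustrative.

```python
import numpy as np

def warp_volume(volume, disp):
    """Warp a volume with a per-voxel displacement field.

    disp has shape volume.shape + (3,); output[p] samples the input at
    p + disp[p] with nearest-neighbour lookup, clamped to the bounds.
    """
    nx, ny, nz = volume.shape
    xs, ys, zs = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    src = np.stack([xs, ys, zs], axis=-1) + disp       # sample positions
    idx = np.rint(src).astype(int)
    np.clip(idx, 0, np.array(volume.shape) - 1, out=idx)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]

def drr(volume, axis=2):
    """Parallel-beam DRR approximation: line integrals along one axis."""
    return volume.sum(axis=axis)
```

In the registration loop, candidate displacement fields (e.g. from the 4D CT learning phase) would be applied with `warp_volume`, and the resulting DRR compared against the acquired X-ray image to drive the 2D motion computation.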
Level set methods have become increasingly popular as a framework for image segmentation. Yet when used as a generic segmentation tool, they suffer from an important drawback: current formulations allow little user interaction. Upon initialization, boundaries propagate to the final segmentation without the user being able to guide or correct the result. In the present work, we address this limitation by proposing a probabilistic framework for image segmentation which integrates input intensity information and user interaction on an equal footing. The resulting algorithm determines the most likely segmentation given the input image and the user input. To allow user interaction in real time during the segmentation, the algorithm is implemented on a graphics card in a narrow band formulation.
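As a rough illustration of the two ingredients named above, a narrow band update and user input folded into the evolution, the following sketch evolves a level set with a simple two-region intensity force (a Chan-Vese-style region competition used here as a stand-in for the paper's probabilistic model) and lets user scribbles pin pixels to the inside or outside. The function name `segment`, the force term, and the seed handling are all illustrative assumptions.

```python
import numpy as np

def segment(image, phi, seeds=None, iters=200, dt=0.4, band=3.0):
    """Narrow-band level set segmentation sketch.

    phi < 0 marks the inside region.  seeds is an optional array with
    +1 (user says inside), -1 (user says outside), 0 (no input).
    """
    for _ in range(iters):
        inside = phi < 0
        if not inside.any() or inside.all():
            break
        mu_in = image[inside].mean()
        mu_out = image[~inside].mean()
        # region force: positive where the pixel matches the inside model
        F = (image - mu_out) ** 2 - (image - mu_in) ** 2
        gy, gx = np.gradient(phi)
        grad = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        nb = np.abs(phi) < band                 # update only near the front
        phi = phi - dt * np.where(nb, F * grad / (np.abs(F).max() + 1e-8), 0.0)
        if seeds is not None:                   # user input as constraints
            phi = np.where(seeds > 0, -band, phi)
            phi = np.where(seeds < 0, band, phi)
    return phi < 0
```

Restricting the update to the band `|phi| < band` is what makes per-iteration cost proportional to the boundary length rather than the image size; reapplying the seed constraints inside the loop is one simple way to let interaction arriving mid-evolution steer the front.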