In this work, we present a novel method to combine mutually exclusive CT image properties, which emerge from different reconstruction kernels and display settings, into a single organ-specific image reconstruction and display. We propose a context-sensitive reconstruction that locally emphasizes desired image properties by exploiting prior anatomical knowledge. Furthermore, we introduce an organ-specific windowing and display method that aims to provide superior image visualization. Using a coarse-to-fine hierarchical 3D fully convolutional network (3D U-Net), the CT data set is segmented and classified into different organs, e.g., the heart, vasculature, liver, kidney, spleen, and lung, as well as into the tissue types bone, fat, soft tissue, and vessels. Reconstruction and display parameters best suited to the organ, tissue type, and clinical indication are chosen automatically from a predefined set of reconstruction parameters on a per-voxel basis. The approach is evaluated on patient data acquired with a dual-source CT system. The final context-sensitive images combine the indication-specific advantages of the different parameter settings, uniting the desired image properties of each tissue in a single volume. A comparison with conventionally reconstructed and displayed images shows that the compound image improves spatial resolution in highly attenuating objects and in air while maintaining a low noise level in soft tissue. The compound images present significantly more information to the reader at once, so that handling multiple volumes may no longer be necessary. The presented method fits naturally into the clinical workflow and has the potential to increase the rate of incidental findings.
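The per-voxel selection step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: it assumes two reconstructions of the same scan already exist (a sharp kernel favored for bone and lung, a smooth kernel favored for soft tissue) together with a tissue-label volume from the segmentation network, and it picks the reconstruction best suited to each voxel. The label codes, tissue-to-kernel mapping, and function names are illustrative assumptions, not the authors' actual parameter set.

```python
import numpy as np

# Assumed label codes produced by the segmentation network (illustrative).
SOFT_TISSUE, BONE, LUNG = 0, 1, 2

def compound_image(recon_smooth, recon_sharp, labels):
    """Per-voxel compounding: use the sharp-kernel reconstruction where high
    spatial resolution is desired (bone, lung/air) and the smooth-kernel
    reconstruction elsewhere to keep soft-tissue noise low."""
    use_sharp = np.isin(labels, [BONE, LUNG])
    return np.where(use_sharp, recon_sharp, recon_smooth)

# Tiny 2x2x2 toy volume (values loosely in HU) for demonstration.
smooth = np.full((2, 2, 2), 40.0)   # low-noise soft-tissue reconstruction
sharp = np.full((2, 2, 2), 35.0)    # high-resolution reconstruction
labels = np.zeros((2, 2, 2), dtype=int)
labels[0, 0, 0] = BONE
labels[1, 1, 1] = LUNG

out = compound_image(smooth, sharp, labels)
```

In practice a hard binary mask would produce visible seams at organ boundaries; a smooth transition (e.g., feathering the mask with a small Gaussian blur before blending) would be a natural refinement of this sketch.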