Analyzing the massive amounts of data gathered in many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualization. One possible approach to this problem is stereoscopic 3D data visualization. In this paper, we propose several methods that provide high-quality data visualization and explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the framework of the ALICE experiment.
Visual 3D data reconstruction is an important challenge in the LHC ALICE detector experiment. Visualization of 3D data is also an important subject in photonics in general. In this paper we propose several solutions enabling effective perception and location-based visualization of the data provided by detectors in high energy physics experiments.
Normal mapping is a powerful technique for simulating surface roughness by means of normal maps. A high-polygon-count model is represented by a coarse polygon mesh, with the fine details stored in the normal map. The technique thus greatly reduces the geometric complexity of models and shifts the demands onto effective normal map compression algorithms. In this paper we present a normal map compression algorithm which extends the commonly used 3Dc algorithm introduced by ATI with wavelet compression based on the Haar basis. Each block component is coded by one of two modes, and the one which introduces the smallest error is chosen for the block component representation. This allows for better adaptation to normal map data and improves the peak signal-to-noise ratio compared to standalone 3Dc.
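The per-block mode decision described above can be illustrated with a minimal sketch. This is not the authors' implementation: the block size, quantization levels, and coefficient-truncation parameter are illustrative assumptions, and the "3Dc-like" mode is reduced to simple min/max range quantization of one block component.

```python
def haar_forward(block):
    # One level of the 1D Haar transform: pairwise averages and differences.
    avg = [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]
    diff = [(block[i] - block[i + 1]) / 2 for i in range(0, len(block), 2)]
    return avg + diff

def haar_inverse(coeffs):
    # Exact inverse of haar_forward.
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out += [a + d, a - d]
    return out

def encode_3dc_like(block, levels=8):
    # 3Dc-style mode: store block min/max, quantize samples to `levels` steps.
    lo, hi = min(block), max(block)
    if hi == lo:
        return [lo] * len(block)
    step = (hi - lo) / (levels - 1)
    return [lo + round((v - lo) / step) * step for v in block]

def encode_haar_like(block, keep=4):
    # Wavelet mode: transform, keep only the `keep` largest coefficients.
    coeffs = haar_forward(block)
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    kept = set(order[:keep])
    return haar_inverse([c if i in kept else 0.0 for i, c in enumerate(coeffs)])

def sse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def encode_block(block):
    # Code the block component with both modes; keep the smaller-error one.
    a = encode_3dc_like(block)
    b = encode_haar_like(block)
    return ("3dc", a) if sse(block, a) <= sse(block, b) else ("haar", b)
```

Because the chosen mode is by construction never worse than either mode alone, the per-block decision can only improve the overall reconstruction error relative to standalone 3Dc.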
In this paper we present an enhanced real-time selective antialiasing solution. We propose to use a directional filtering technique as an antialiasing tool. The best post-processing antialiasing effect is obtained when the low-pass filter is applied along the local orientation of the antialiased features. Previous authors proposed a complicated curve fitting method as a solution for local feature antialiasing. Here we propose a simpler and more efficient solution. Instead of using a curve fitting method based on second-order intensity derivatives, we use a set of first-order derivatives applied directly to the z-buffer content. For each detected feature direction, an appropriate directional Gaussian convolution filter can be applied. This way the low-pass filter is applied along the local features selected for antialiasing, filtering out high-frequency distortions due to intermodulation. In this approach the high-pass convolution filtering applied to the z-buffer has a twofold application: it selects the object edges that need to be antialiased and it gives a local feature direction allowing for edge reconstruction. The advantage of the approach proposed here is that it preserves texture details. Textures are usually filtered independently using trilinear or anisotropic filtering, which with traditional antialiasing techniques leads to overblurring.
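The idea of deriving the filtering direction from first-order z-buffer derivatives can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gradient-magnitude threshold, the 3-tap (1, 2, 1)/4 Gaussian, and nearest-pixel sampling along the edge are all simplifying assumptions.

```python
import math

def gradients(z):
    # Central-difference first-order derivatives of the z-buffer.
    h, w = len(z), len(z[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx[y][x] = (z[y][x + 1] - z[y][x - 1]) / 2
            gy[y][x] = (z[y + 1][x] - z[y - 1][x]) / 2
    return gx, gy

def directional_antialias(img, z, threshold=0.5):
    # Blur only where the z-gradient marks a visible edge, and only along
    # the edge direction (perpendicular to the gradient), so that texture
    # detail away from edges is left untouched.
    gx, gy = gradients(z)
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mag = math.hypot(gx[y][x], gy[y][x])
            if mag < threshold:
                continue  # not a depth edge: keep the original pixel
            dx, dy = -gy[y][x] / mag, gx[y][x] / mag  # edge direction
            x0, y0 = x - round(dx), y - round(dy)
            x1, y1 = x + round(dx), y + round(dy)
            # 3-tap Gaussian (1, 2, 1)/4 along the local edge direction.
            out[y][x] = (img[y0][x0] + 2 * img[y][x] + img[y1][x1]) / 4
    return out
```

Because the convolution runs along the edge rather than across it, pixels whose neighborhood is constant in that direction are unchanged, which is exactly the texture-preservation property claimed above.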
Antialiasing is still a challenge in real-time computer graphics. In this paper we present a real-time selective antialiasing solution that builds upon our experience. We investigated existing approaches to real-time antialiasing and found a new, simpler solution. Our new idea is to use the z-buffer directly for extracting visible edge information. The solution presented here can be summarized as follows: 1) select object edges by applying a spatial convolution with a Laplacian to the z-buffer; 2) filter out aliasing artifacts by applying low-pass spatial convolution filtering to the selected pixels. In this approach the same circuit architecture can be used for the selection and the antialiasing of the selected pixels. The major advantage of using spatial convolution in the context of antialiasing is that general-purpose hardware real-time convolution filters are well known and available. The method presented here can be used to improve image quality in graphics accelerators, but also in applications such as real-time ray tracing.
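The two-step scheme above can be sketched with a single generic 3 × 3 convolution routine used for both the Laplacian edge selection and the low-pass pass, mirroring the point that one circuit architecture serves both stages. The edge threshold and the box low-pass kernel are illustrative assumptions, not the paper's exact parameters.

```python
def convolve3x3(img, kernel):
    # Generic 3x3 spatial convolution; border pixels are left at zero.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]   # high-pass: edge selection
BOX = [[1 / 9] * 3] * 3                          # low-pass: antialiasing

def selective_antialias(img, z, threshold=0.1):
    # Step 1: Laplacian of the z-buffer marks depth discontinuities.
    # Step 2: low-pass filtering replaces only the marked pixels.
    edges = convolve3x3(z, LAPLACIAN)
    blurred = convolve3x3(img, BOX)
    h, w = len(img), len(img[0])
    return [[blurred[y][x] if abs(edges[y][x]) > threshold else img[y][x]
             for x in range(w)] for y in range(h)]
```

Note that both stages are plain 3 × 3 convolutions, so either could be mapped onto the same hardware convolution filter in alternation.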
Simulation of special effects such as the defocus effect, the depth-of-field effect, or raindrops and water film falling on the windshield may be very useful in visual simulators and in all computer graphics applications that need realistic images of outdoor scenery. Those effects are especially important when rendering poor visibility conditions in flight and driving simulators, but can also be applied, for example, to compositing computer graphics and video sequences, i.e. in Augmented Reality systems. This paper proposes a new approach to rendering those optical effects by iterative adaptive filtering using spatial convolution. The advantage of this solution is that the adaptive convolution can be done in real time by existing hardware. The optical effects mentioned above can be introduced into an image computed using a conventional camera model by applying to the intensity of each pixel a convolution filter with an appropriate point spread function. The algorithms described in this paper can be easily implemented in the visualization pipeline: the final effect may be obtained by iterative filtering using a single hardware convolution filter or with a pipeline composed of identical 3 × 3 filters placed as the stages of this pipeline. Another advantage of the proposed solution is that the extension based on the proposed algorithm can be added to existing rendering systems as a final stage of the visualization pipeline.
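A depth-of-field variant of this pipeline idea can be sketched as follows: a chain of identical 3 × 3 Gaussian stages is precomputed, and each pixel is taken from the stage whose accumulated point spread function matches its distance from the focal plane. The stage-selection formula (`steps_per_unit`, `max_steps`) is an illustrative assumption, not the paper's model.

```python
def blur3x3(img):
    # One pipeline stage: 3x3 Gaussian kernel (1 2 1; 2 4 2; 1 2 1)/16.
    h, w = len(img), len(img[0])
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3)) / 16
    return out

def depth_of_field(img, depth, focal, steps_per_unit=4, max_steps=3):
    # Iterating the same small filter widens the effective point spread
    # function; pixels far from the focal plane use later pipeline stages.
    stages = [img]
    for _ in range(max_steps):
        stages.append(blur3x3(stages[-1]))
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = min(max_steps, int(abs(depth[y][x] - focal) * steps_per_unit))
            out[y][x] = stages[n][y][x]
    return out
```

Since every stage applies the same kernel, the chain maps naturally onto one hardware convolution filter applied iteratively, as the abstract suggests.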
This paper proposes a new algorithm for tracking objects and object boundaries. The algorithm was developed and applied in a system used for compositing computer-generated images and real-world video sequences, but it can be applied in general in all tracking systems where accuracy and high processing speed are required. The algorithm is based on the analysis of histograms obtained by summing the pixels of edge-segmented images along chosen axes. Edge segmentation is done by spatial convolution using a gradient operator. The advantage of such an approach is that it can be performed in real time using commercially available hardware convolution filters. After edge extraction and histogram computation, the respective positions of the maxima in the edge intensity histograms in the current and previous frames are compared and matched. The information about the displacement of the histogram maxima obtained this way can be directly converted into information about changes of the target boundary positions along the chosen axes.
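The histogram-based tracking step can be sketched as below. This is a simplified illustration, not the paper's system: the gradient operator is reduced to central differences, and maxima are matched by comparing only the position of the single largest histogram peak per axis.

```python
def edge_intensity(img):
    # Gradient-magnitude edge image via central differences (|gx| + |gy|).
    h, w = len(img), len(img[0])
    e = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            e[y][x] = abs(gx) + abs(gy)
    return e

def axis_histograms(edges):
    # Sum the edge image along each axis: per-column and per-row histograms.
    h, w = len(edges), len(edges[0])
    col = [sum(edges[y][x] for y in range(h)) for x in range(w)]
    row = [sum(edges[y]) for y in range(h)]
    return col, row

def displacement(prev_hist, cur_hist):
    # Shift of the histogram maximum between consecutive frames.
    return cur_hist.index(max(cur_hist)) - prev_hist.index(max(prev_hist))
```

Shifting a target shifts its edge image, and therefore its axis histograms, by the same amount, so the peak displacement reads off the boundary motion along each axis directly.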