The availability of different imaging modalities requires techniques to process and combine information from different images of the same phenomena. We present a symmetry-based approach for combining information from multiple images. Fusion is performed at the data level: actual object boundaries and shape descriptors are recovered directly from raw sensor output(s). The method is applicable to an arbitrary number of images in arbitrary dimension.
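To make the notion of data-level fusion concrete, the following is a minimal illustrative sketch, not the paper's method: it combines several raw sensor views of the same scene pixel-wise *before* any boundary extraction, then recovers an object boundary directly from the fused field. The averaging rule, threshold, and all names below are hypothetical assumptions for illustration; the paper itself uses a common symmetry set rather than simple thresholding.

```python
import numpy as np

def fuse_data_level(images):
    """Combine raw sensor outputs pixel-wise (data-level fusion by averaging)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.mean(axis=0)

def extract_boundary(fused, threshold=0.5):
    """Recover a boundary directly from the fused field: a boundary pixel is
    inside the thresholded region but touches at least one outside neighbor."""
    inside = fused >= threshold
    neighbors = (
        np.roll(inside, 1, axis=0), np.roll(inside, -1, axis=0),
        np.roll(inside, 1, axis=1), np.roll(inside, -1, axis=1),
    )
    all_neighbors_inside = np.logical_and.reduce(neighbors)
    return inside & ~all_neighbors_inside

# Two noisy "sensor" views of the same disk-shaped object.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
disk = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
views = [disk + 0.3 * rng.standard_normal(disk.shape) for _ in range(2)]

fused = fuse_data_level(views)       # fusion happens on raw data
boundary = extract_boundary(fused)   # boundary recovered from fused field
```

Because the views are fused before thresholding, the noise in the individual sensors partially cancels, so the recovered boundary is cleaner than one extracted from either view alone.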
Sibel Z. Tari,
"Data-level fusion using common symmetry set," Proc. SPIE 3719, Sensor Fusion: Architectures, Algorithms, and Applications III (12 March 1999); https://doi.org/10.1117/12.341354