Pixel-level image-processing algorithms must extract spatial features from noisy sensor data, which often requires operators that amplify high-frequency noise. One way to deal with this problem is to smooth the image before applying any spatial differentiation; such smoothing, however, spreads object characteristics beyond object boundaries. Identifying discontinuities and using them explicitly as boundaries for smoothing has been proposed as a technique to overcome this problem. This approach has been used to perform cooperative computations between multiple descriptions of a scene, e.g., fusion of the edge and motion fields of a given scene. Here the approach is extended to multisensor systems: the discontinuities detected in the output of one sensor are used to define the regions over which the output of a second sensor is smoothed. For example, the depth discontinuities present in laser radar data can be used to define smoothing boundaries for infrared focal-plane arrays. The authors have recently developed a CMOS chip (28 × 36) that performs this task in real time. The chip consists of a resistive network with elements that can be switched ON or OFF by loading a suitable bit pattern. The bit pattern controlling the switches is generated from the discontinuities found in the output of sensor #1, and the output of sensor #2 is applied to the resistive network for data smoothing. If all the switches are held in the conducting state, the chip performs ordinary data smoothing; if the switches along object boundaries are turned OFF, a region of bounded smoothing is created. Data from a third sensor (e.g., intensity data from laser radar) can be incorporated in the form of a map of 'confidence in data.' Results obtained with this chip on synthetic data are presented, along with other potential applications of the chip.
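
The switch-bounded smoothing performed by the resistive network can be sketched in software as an iterative diffusion on a grid, where each inter-node link carries a switch (ON = 1, OFF = 0) and a per-pixel confidence map weights how strongly each node is pulled back toward its measured value. This is a minimal simulation under assumed parameters, not the chip's actual circuit behavior; the function name, the step sizes `lam` and `fid`, and the switch-array layout are all illustrative.

```python
import numpy as np

def bounded_smooth(data, switches_h, switches_v, confidence=None,
                   lam=0.2, fid=0.05, iters=200):
    """Smooth `data` by diffusing only across neighbor links whose
    switch is 1. Links set to 0 along object boundaries (e.g., depth
    discontinuities from sensor #1) confine smoothing within regions.

    switches_h[i, j] gates the link between (i, j) and (i, j+1);
    switches_v[i, j] gates the link between (i, j) and (i+1, j).
    `confidence` (0..1) weights the pull toward the measured data.
    """
    if confidence is None:
        confidence = np.ones_like(data, dtype=float)
    u = data.astype(float).copy()
    for _ in range(iters):
        flux = np.zeros_like(u)
        # horizontal diffusion through ON links only
        d = (u[:, 1:] - u[:, :-1]) * switches_h
        flux[:, :-1] += d
        flux[:, 1:] -= d
        # vertical diffusion through ON links only
        d = (u[1:, :] - u[:-1, :]) * switches_v
        flux[:-1, :] += d
        flux[1:, :] -= d
        # diffuse, then pull toward measurements by confidence
        u += lam * flux + fid * confidence * (data - u)
    return u

# Step image: left half 0, right half 10 (sensor #2 output)
data = np.concatenate([np.zeros((4, 3)), np.full((4, 3), 10.0)], axis=1)
sw_h = np.ones((4, 5))
sw_v = np.ones((3, 6))
sw_h[:, 2] = 0  # switches OFF along the boundary found by sensor #1

edge_kept = bounded_smooth(data, sw_h, sw_v)   # step preserved
blurred = bounded_smooth(data, np.ones((4, 5)), sw_v)  # step blurred
```

With the boundary switches OFF, the data are constant within each region, so no current flows and the step survives intact; with all switches ON, the step diffuses across the former boundary, which is exactly the object-characteristic spreading the chip is designed to prevent.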