Image features are regularly used in computer and machine vision applications to solve problems pertaining to image geometry. The effectiveness of these techniques relies on how accurately corresponding features are located in two (or more) images. Current techniques define a feature as a variation in intensity within a pixel neighborhood: the greater the change, the more pronounced the feature and the more accurately it can be located. The remaining uncertainty is characterized by a covariance matrix, which is used to weight the influence of each feature on model estimation. As our first contribution, we propose a novel technique to decrease feature location uncertainty. Considering uncertainty information from all three channels of an RGB image, we employ covariance intersection (CI) to reduce measurement errors, leading to a decrease in homography (H) estimation error. Our second contribution is an L∞ filter, capable of dealing with system uncertainties, to estimate H. By feeding the feature location uncertainty information into this filter, we obtain significantly improved estimates of the homography. As we demonstrate through a number of examples, our filtering approach outperforms the covariance-weighted optimization techniques proposed in the literature.
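As an illustration of the channel-fusion idea, the following sketch applies standard covariance intersection to per-channel feature-location estimates. The specific 2×2 covariances, means, and the trace-based weight heuristic are illustrative assumptions, not the paper's actual method; the abstract does not specify how the CI weights are chosen.

```python
import numpy as np

def covariance_intersection(means, covs, weights):
    # Standard CI fusion: the fused information matrix is the weighted sum
    # of the individual information (inverse-covariance) matrices, and the
    # fused mean is the corresponding information-weighted combination.
    infos = [w * np.linalg.inv(P) for w, P in zip(weights, covs)]
    P_fused = np.linalg.inv(np.sum(infos, axis=0))
    x_fused = P_fused @ np.sum([I @ x for I, x in zip(infos, means)], axis=0)
    return x_fused, P_fused

# Hypothetical 2x2 location covariances and subpixel position estimates of
# one feature, measured independently in the R, G, and B channels.
covs = [np.diag([0.8, 0.3]), np.diag([0.4, 0.6]), np.diag([0.5, 0.5])]
means = [np.array([10.1, 20.3]), np.array([10.0, 20.1]), np.array([9.9, 20.2])]

# Heuristic CI weights, inversely proportional to each channel's covariance
# trace and normalized to sum to one (a cheap stand-in for optimizing the
# weights to minimize the trace or determinant of the fused covariance).
traces = np.array([np.trace(P) for P in covs])
weights = (1.0 / traces) / np.sum(1.0 / traces)

x_fused, P_fused = covariance_intersection(means, covs, weights)
```

The fused covariance can then replace the per-channel covariances when weighting each feature's contribution to homography estimation.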