Super-resolution enhancement algorithms estimate a high-resolution video still (HRVS) from several low-resolution frames, provided that objects within the image sequence move with subpixel increments. However, estimating an accurate subpixel-resolution motion field between two low-resolution, noisy video frames has proven to be a formidable challenge. Up-sampling the image sequence frames followed by the application of block matching, optical flow estimation, or Bayesian motion estimation results in relatively poor subpixel-resolution motion fields, and consequently inaccurate regions within the super-resolution enhanced video still. This is particularly true for large interpolation factors (greater than or equal to 4). To improve the quality of the subpixel motion fields and the corresponding HRVS, motion can be estimated for each object within a segmented image sequence. First, a reference video frame is segmented into its constituent objects, and a mask is generated for each object that describes its spatial location. As described previously, subpixel-resolution motion estimation is then conducted by video frame up-sampling followed by the application of a motion estimation algorithm. Finally, the motion vectors are averaged over the region of each mask by applying an alpha-trimmed mean filter to the horizontal and vertical components separately. Since each object moves as a single entity, averaging eliminates many of the motion estimation errors and results in much more consistent subpixel motion fields. A substantial improvement is also visible within particular regions of the HRVS estimates. Subpixel-resolution motion fields and HRVS estimates are computed for interpolation factors of 2, 4, 8, and 16 to examine the benefits of object segmentation and motion field averaging.
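The mask-based averaging step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the choice of a symmetric trim, and the default trimming fraction are assumptions for the sketch. For each object mask, the horizontal and vertical motion components are averaged separately with an alpha-trimmed mean (sort the samples, discard a fraction from each end, average the remainder), and the resulting vector is assigned to every pixel of the object.

```python
import numpy as np


def alpha_trimmed_mean(values, alpha=0.1):
    """Mean of the samples after discarding the alpha fraction of the
    smallest and of the largest samples (a symmetric trim, assumed here)."""
    v = np.sort(np.ravel(values))
    k = int(alpha * v.size)          # number of samples trimmed from each end
    if v.size - 2 * k <= 0:
        return float(np.mean(v))     # too few samples to trim; plain mean
    return float(np.mean(v[k:v.size - k]))


def average_motion_over_masks(u, v, masks, alpha=0.1):
    """Replace the motion vectors within each object mask by that object's
    alpha-trimmed mean vector (components filtered separately).

    u, v  : 2-D arrays of horizontal/vertical subpixel motion components
    masks : iterable of boolean arrays, one per segmented object
    """
    u_out, v_out = u.copy(), v.copy()
    for mask in masks:
        u_out[mask] = alpha_trimmed_mean(u[mask], alpha)
        v_out[mask] = alpha_trimmed_mean(v[mask], alpha)
    return u_out, v_out
```

Because each object is assumed to move as a single entity, the trim discards the outlying vector estimates before averaging, which is what suppresses the motion estimation errors described above.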