In most clinical MR images, global patient motion is predominantly inter-view (between readouts) and corrupts the phase of the received signal for that line of k-space. No information is lost, however: if the original motion is known and the appropriate phase corrections are applied, the image can be perfectly restored. It is therefore possible to correct for motion given only the real and imaginary data from the scanner, by trying different candidate motion corrections and searching for the highest-quality resulting image with a suitable evaluation function. Such an `autofocusing' algorithm was recently described, using image entropy as the cost function; however, it requires very long computation times. If the corrupting motion is primarily 1D, much faster autofocusing may be possible by calculating only selected lines of the image. In this paper, we describe work on such an algorithm, implemented with both minimum entropy and maximum variance as cost functions. Tests on several 256 × 256 magnitude images artificially corrupted by 1D motion indicate that evaluating only eight selected columns of the image (calculated with eight 1D FFTs) works very well--essentially as well as evaluating the whole image, which requires 2D FFTs. The run time dropped from several hours with 2D FFTs to under ten minutes with 1D FFTs. One test image with little dark area was not well corrected, suggesting that both cost functions may depend on dark regions being cleared of artifacts.
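To make the column-wise evaluation concrete, the following is a minimal sketch (not the authors' implementation) of an entropy cost evaluated on a few selected image columns. It assumes the simplest corruption model consistent with the abstract: a single phase error per phase-encode view (per ky line of k-space). Each selected column is reconstructed with a single-point inverse DFT along kx followed by one 1D inverse FFT along ky, so eight columns cost eight 1D FFTs per trial correction. The function and variable names (`column_cost`, `phase_per_view`, `cols`) are illustrative, not from the paper.

```python
import numpy as np

def entropy(col_mag):
    """Shannon entropy of a column's normalized pixel magnitudes."""
    p = col_mag / col_mag.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def column_cost(kspace, phase_per_view, cols):
    """Entropy cost evaluated on a few image columns only.

    kspace:          (Nx, Ny) raw data, with views (ky) along axis 1
    phase_per_view:  trial phase correction, one value per view (radians)
    cols:            image column indices to reconstruct and score
    """
    Nx, Ny = kspace.shape
    # Apply the trial correction: undo an assumed per-view phase error.
    corrected = kspace * np.exp(-1j * phase_per_view)[np.newaxis, :]
    kx = np.fft.fftfreq(Nx) * Nx  # integer spatial frequencies
    total = 0.0
    for x0 in cols:
        # Single-point inverse DFT along kx: one hybrid-space row per column.
        row = (np.exp(2j * np.pi * kx * x0 / Nx) @ corrected) / Nx
        # One 1D inverse FFT along ky then yields the full image column.
        col = np.fft.ifft(row)
        total += entropy(np.abs(col))
    return total
```

An optimizer would then search over `phase_per_view` to minimize this cost; because only a handful of columns are reconstructed per trial, each evaluation is far cheaper than a full 2D FFT.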