The human visual system automatically segments some regions based on luminance gradients. Other regions are segmented based on texture gradients, without segmenting the internal constituent regions whose boundary luminance gradients create the texture appearance. When visual attention is directed to a particular location, size scale, or shape within the texture, the internal constituent regions can be picked out, but the perception of texture is then lost. This suggests that texture segregation and luminance segregation occur in parallel, but not at the same location at the same time. This paper presents a possible mechanization of the process that determines when and where spatial modulation is perceived as texture versus as region boundaries. It begins with multi-resolution spatial band-pass filtering, patterned after recent computational vision modeling theory. The algorithm examines the spatial distribution of zero-crossings, i.e., phase information, in each band-pass channel. Wide regions in which zero-crossings are dense are perceived as textures. Regions in which the zero-crossings can be enclosed in narrow, lineal bands are perceived as luminance gradient boundaries. The algorithm produces maps delineating regions perceived as texture and regions perceived as luminance gradients for each spatial band-pass channel, i.e., at multiple resolution scales. A second algorithm recombines the band-pass channel output with the maps to produce two images: one containing the texture and one containing the luminance gradients. In addition to providing insight into possible mechanisms of visual perception, the algorithm has potential application as an image segmentation pre-processor. The concept is to apply a texture segmentation algorithm to the texture image, apply a luminance segmentation algorithm to the luminance gradient image, and then combine the results.
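The pipeline described above — band-pass filtering, locating zero-crossings, and classifying regions by zero-crossing density — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the difference-of-Gaussians filter stands in for one channel of the multi-resolution bank, and the function names, window size, and density threshold are all assumptions chosen for clarity.

```python
import numpy as np
from scipy import ndimage

def bandpass(img, sigma_lo=1.0, sigma_hi=3.0):
    """One band-pass channel, approximated here by a difference of Gaussians."""
    return ndimage.gaussian_filter(img, sigma_lo) - ndimage.gaussian_filter(img, sigma_hi)

def zero_crossings(channel):
    """Boolean map marking pixels where the filtered signal changes sign
    relative to a neighbor (the phase information examined by the algorithm)."""
    s = np.sign(channel)
    zc = np.zeros(channel.shape, dtype=bool)
    zc[:-1, :] |= s[:-1, :] != s[1:, :]   # vertical sign changes
    zc[:, :-1] |= s[:, :-1] != s[:, 1:]   # horizontal sign changes
    return zc

def classify(channel, window=9, density_thresh=0.25):
    """Wide regions of dense zero-crossings -> texture map;
    remaining zero-crossings (narrow, lineal bands) -> luminance-boundary map.
    The window size and threshold are illustrative assumptions."""
    zc = zero_crossings(channel)
    density = ndimage.uniform_filter(zc.astype(float), size=window)
    texture_map = density > density_thresh
    boundary_map = zc & ~texture_map
    return texture_map, boundary_map
```

For example, an image containing a patch of noise texture and a luminance step edge would yield a texture map covering the noisy patch and a boundary map tracing the step, and the full algorithm would repeat this per channel across the multi-resolution bank.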