In the process of creating synthetic scenes for use in simulations and visualizations, texture serves as a surrogate for high spatial definition. For example, if one were to measure the location and characteristics of every blade of grass in a lawn, a scene composed from those data would be expected to appear 'real'; because this process is excruciatingly laborious, however, various techniques have been devised to place the required detail in the scene through texturing. Experience gained during the recent Smart Weapons Operability Enhancement Joint Test and Evaluation (SWOE JT&E) has shown the need for higher-fidelity texturing algorithms and for better parameterization of those already in use. In this study, four aspects of the problem are analyzed: texture extraction, texture insertion, texture metrics, and texture-creation algorithms. The results of extracting real texture from an image, measuring it with a variety of metrics, and generating similar texture with three different algorithms are presented. The same metrics can be used to define clutter and to compare 'real' and synthetic (artificial) scenes in an objective manner.
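The abstract does not name the specific texture metrics used in the study. Purely as an illustrative sketch of what such a metric looks like, the following computes one widely used texture statistic, the contrast of a gray-level co-occurrence matrix (GLCM); the function name and example patches are hypothetical and not taken from the report.

```python
# Illustrative sketch (not the report's method): GLCM contrast,
# a common scalar texture metric, computed in pure Python.

from collections import Counter

def glcm_contrast(image, dx=1, dy=0):
    """Contrast of the GLCM for a 2-D list of integer gray levels.

    Counts co-occurring gray-level pairs at offset (dx, dy), normalizes
    the counts to probabilities, and weights each pair by the squared
    gray-level difference; higher values indicate busier texture.
    """
    rows, cols = len(image), len(image[0])
    pairs = Counter()
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                pairs[(image[y][x], image[ny][nx])] += 1
    total = sum(pairs.values())
    return sum(n / total * (i - j) ** 2 for (i, j), n in pairs.items())

# A uniform patch has zero contrast; a checkerboard has high contrast.
flat = [[3, 3], [3, 3]]
checker = [[0, 1], [1, 0]]
print(glcm_contrast(flat))     # 0.0
print(glcm_contrast(checker))  # 1.0
```

A metric of this kind gives a single number per image patch, so a 'real' patch and a synthetically textured patch can be compared objectively, which is the role the abstract assigns to its metrics.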