Over the last 18 months, our team at the Laboratory for Computational Science & Engineering (LCSE) at the University of Minnesota has been moving our data analysis and visualization applications from small clusters of PCs within our lab to a Grid-based approach using multiple PC clusters with dynamically varying availability. Under support from an NSF CISE Research Resources grant, we have outfitted 52 Dell PCs with 400 GB disk subsystems in a student lab in our building, a lab operated by the University's Academic and Distributed Computing Services (ADCS) organization. This PC cluster supplements another PC cluster of 10 machines in our lab. As the students gradually leave the ADCS lab after 10 PM, the PCs are rebooted into an operating system image that sees these disk subsystems and communicates with a central, 32-processor Unisys ES-7000 machine in our lab. The ES-7000 hosts databases that coordinate the work of these 52 PCs along with that of 10 additional Dell PCs in our lab that drive our PowerWall display. This equipment forms a local Grid that we coordinate to analyze and visualize data generated on remote clusters at NCSA. The PCs of the student lab offer a 20 TB pool of disk storage for our simulation data as well as a large movie rendering capability with their Nvidia graphics engines. However, these machines do not become available to us in force until after about 1 AM. This fact has forced us to automate our visualization process to an unusual degree. It has also forced us to address problems of security and of run-error diagnosis that we could easily avoid in a more standard environment. In this paper we report our methods of addressing these challenges and describe the software tools that we have developed and made available for this purpose on our Web site, www.lcse.umn.edu.
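The quoted 20 TB storage pool follows directly from the lab's configuration. A minimal arithmetic sketch (our own check, not part of the paper's software; the variable names are illustrative):

```python
# Back-of-the-envelope check: 52 student-lab PCs, each outfitted with
# a 400 GB disk subsystem, form the aggregate storage pool.
num_pcs = 52
disk_gb = 400

pool_gb = num_pcs * disk_gb   # 20,800 GB total
pool_tb = pool_gb / 1000.0    # about 20.8 TB, quoted as a 20 TB pool

print(pool_gb, pool_tb)
```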
We also report our experience in using this system to visualize 1.4 TB of vorticity volumetric data from a recent simulation of homogeneous, compressible turbulence with our PPM code. This code was run on the NSF TeraGrid cluster at NCSA, and the data was transported to our lab at 8 MB/sec over the Internet using the same ipRIO software that we developed to move data around within our own environment. The movie visualizations generated overnight on the ADCS PCs are viewed on the LCSE 10-panel, 13 Mpixel PowerWall the next day. Smaller trial movies can be generated on the small PC cluster in our lab before a large overnight movie request is submitted.
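The 8 MB/sec sustained rate puts the scale of the 1.4 TB transfer in perspective. A minimal sketch of the arithmetic (our own estimate, assuming decimal units, i.e. 1 TB = 10^6 MB; this is not derived from the ipRIO software itself):

```python
# Estimated wall-clock time to move the 1.4 TB vorticity dataset
# from NCSA at the reported sustained rate of 8 MB/sec.
data_tb = 1.4
rate_mb_per_s = 8.0

data_mb = data_tb * 1_000_000        # 1 TB = 10^6 MB (decimal convention)
seconds = data_mb / rate_mb_per_s    # 175,000 seconds
hours = seconds / 3600.0             # about 48.6 hours

print(f"{hours:.1f} hours")          # roughly two days of sustained transfer
```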