In the construction of three-dimensional (3D) point clouds from multi-view aerial imagery, voids often occur in the point cloud where multiple views of an area were not obtained during collection. A method is presented for identifying these voids. In this work, point clouds are derived from oblique aerial imagery using multi-view techniques from the photogrammetry and computer vision communities. A voxel-based approach is used to partition the 3D space, and each voxel is classified as containing or not containing derived points. Using the imagery and the camera positions, it is possible to analyze what the cameras can and cannot see, making it possible to label each voxel as occupied, free, or non-classified space. Voids in the data manifest as non-classified voxels. This method has been tested on high-frame-rate oblique aerial imagery captured over Rochester, NY, as well as on synthetic data sets. Also presented is a unique synthetic data set for 3D reconstruction. The data set, created with the Rochester Institute of Technology's Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, provides high-fidelity radiometric data in addition to known 3D locations and surface normals for each pixel location in the scene. This data set is available to the community for use in related research.
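The voxel labeling idea described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the label names (`FREE`, `OCCUPIED`, `UNCLASSIFIED`), the grid size, and the sampling-based ray march are assumptions made here for clarity (a production system would typically use an exact voxel traversal such as Amanatides-Woo rather than fixed-step sampling).

```python
import numpy as np

VOXEL = 1.0  # assumed voxel edge length (world units)
FREE, OCCUPIED, UNCLASSIFIED = 0, 1, 2  # illustrative label codes


def classify_voxels(cameras, points, grid=32):
    """Label a grid^3 voxel volume from camera positions and derived 3D points.

    cameras: iterable of 3-vectors (camera centers)
    points:  (N, 3) array of reconstructed 3D points
    """
    labels = np.full((grid, grid, grid), UNCLASSIFIED, dtype=np.uint8)

    # Voxels containing derived points are occupied.
    idx = np.floor(points / VOXEL).astype(int)
    in_bounds = np.all((idx >= 0) & (idx < grid), axis=1)
    for i, j, k in idx[in_bounds]:
        labels[i, j, k] = OCCUPIED

    # March a ray from each camera toward each point it observed;
    # voxels crossed before reaching the point are visibly empty -> FREE.
    for cam in cameras:
        for p in points:
            d = p - cam
            n_steps = int(np.linalg.norm(d) / (0.5 * VOXEL))
            for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
                q = np.floor((cam + t * d) / VOXEL).astype(int)
                if np.any(q < 0) or np.any(q >= grid):
                    continue
                if labels[tuple(q)] == UNCLASSIFIED:
                    labels[tuple(q)] = FREE

    # Anything never observed and never reconstructed stays UNCLASSIFIED:
    # these voxels are the candidate voids.
    return labels
```

With a single camera and a single point, the voxels along the line of sight become free, the point's voxel becomes occupied, and everything else remains non-classified, which is exactly the signature used to flag voids.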