LiDAR and hyperspectral data provide rich and complementary information about the content of a scene. In this work, we examine methods of data fusion, with the goal of minimizing the information loss caused by point-cloud rasterization and spatial-spectral resampling. Two approaches are investigated and compared: 1) a point-cloud approach, in which spectral indices such as the Normalized Difference Vegetation Index (NDVI) and principal components of the hyperspectral image are computed and appended as attributes to each LiDAR point falling within the spatial extent of the corresponding pixel, and the resulting fused point cloud is classified with a supervised machine learning approach; and 2) a raster-based approach, in which LiDAR raster products (DEMs, DSMs, slope, height, aspect, etc.) are generated and appended to the hyperspectral image cube, which is then classified with traditional spectral classification techniques. The two methods are compared in terms of classification accuracy. LiDAR data and associated orthophotos of the NPS campus collected during 2012-2014, together with hyperspectral Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data collected in 2011, are used for this work.
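The point-cloud approach (1) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the band positions, grid geometry, and all data here are synthetic assumptions, and only a single attribute (NDVI) is appended rather than the full set of indices and principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hyperspectral cube: 10 x 10 pixels, 50 bands (assumed shapes).
cube = rng.random((10, 10, 50))
red_band, nir_band = 29, 45  # hypothetical red / near-infrared band indices

# Per-pixel NDVI = (NIR - red) / (NIR + red); epsilon guards against zero division.
red = cube[:, :, red_band]
nir = cube[:, :, nir_band]
ndvi = (nir - red) / (nir + red + 1e-9)

# Synthetic LiDAR points (x, y, z) over a 100 m x 100 m scene,
# with 10 m pixels aligned to the origin (assumed geometry).
points = rng.random((1000, 3)) * [100.0, 100.0, 30.0]
pixel_size = 10.0

# Map each point to the pixel whose spatial extent contains it.
cols = np.clip((points[:, 0] // pixel_size).astype(int), 0, 9)
rows = np.clip((points[:, 1] // pixel_size).astype(int), 0, 9)

# Append the pixel's NDVI as a fourth per-point attribute; the fused
# cloud would then be passed to a supervised classifier.
fused = np.column_stack([points, ndvi[rows, cols]])
print(fused.shape)  # (1000, 4)
```

In practice the same lookup would be repeated for each spectral index and principal component, giving each LiDAR point a vector of spectral attributes alongside its geometric ones.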