GPGPUs and multicore processors have become commonplace, seeing wide use in traditional high-performance computing systems as well as mobile computing devices. A significant speedup can be achieved for a variety of general-purpose applications by using these technologies. Unfortunately, this speedup is often accompanied by high power and/or energy consumption. As a result, energy conservation is increasingly becoming a major concern in the design of these computing devices. For large-scale systems such as massive data centers, the cost and environmental impact of powering and cooling computer systems is the main driver for energy efficiency. In the mobile computing sector, on the other hand, energy conservation is driven by the need to extend battery life, and power capping is mandated by the restrictive power budgets of mobile platforms such as Unmanned Aerial Vehicles (UAVs). Our focus is to understand the power-performance tradeoffs in executing Army applications on portable or tactical computing platforms. For a GPGPU computing platform, this study investigates how host processors (CPUs) with different Thermal Design Power (TDP) affect the execution time and power consumption of an Army-relevant stereo-matching code accelerated by a GPGPU. For image pairs of approximately one megapixel, we observed a decrease in execution time of nearly 50% and a decrease in average power of 5% when the code was executed on a low-TDP Intel Xeon processor host; the corresponding decrease in energy consumption was over 50%. For a larger image pair, although there was no substantial decrease in execution time, power and energy consumption each decreased by approximately 6%. Although a single case study does not support general conclusions, it points to the possibility that for some tactical-HPC GPGPU-accelerated applications, a host processor with a lower TDP might reduce power consumption without degrading execution-time performance.
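The reported savings follow from the relation E = P × t between energy, average power, and execution time. A minimal sketch of that arithmetic, using the abstract's one-megapixel figures (~50% shorter execution time, ~5% lower average power) as illustrative inputs; the helper name is our own, not from the study:

```python
def energy_reduction(time_reduction: float, power_reduction: float) -> float:
    """Fractional energy savings implied by fractional reductions
    in execution time and average power (E = P * t)."""
    remaining_fraction = (1.0 - time_reduction) * (1.0 - power_reduction)
    return 1.0 - remaining_fraction

# One-megapixel case from the abstract: ~50% less time, ~5% less power.
savings = energy_reduction(time_reduction=0.50, power_reduction=0.05)
print(f"{savings:.1%}")  # 52.5%, consistent with the reported 'over 50%'
```

Because energy is the product of power and time, even a modest power reduction compounds with a large runtime reduction, which is why the energy savings exceed either individual figure.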