The performance of a buried threat detection system (e.g., GPR, FLIR, EMI) can be summarized through two related statistics: the probability of detection (PD) and the false alarm rate (FAR). These statistics impact the system's rate of forward advance, clearance probability, and overall usefulness. Understanding system PD and FAR for each target type of interest is fundamental to making informed decisions regarding system procurement and deployment. Since PD and FAR cannot be measured directly, proper experimental design is required to ensure that estimates of PD and FAR are accurate. Given an unlimited number of target emplacements, estimating PD is straightforward. However, in realistic scenarios with constrained budgets, limited experimental collection time and space, and a limited number of targets, estimating PD becomes significantly more complicated. For example, it may be less expensive to collect data over the same target emplacement multiple times than to collect once over each of several unique target emplacements. Clearly, there is a difference between the quantity and value of the information obtained from these two experiments (one collection over multiple objects versus multiple collections over one particular object). This work clarifies and quantifies the amount of information gained from multiple data collections over one target compared to collecting over multiple unique target burials. Results provide a closed-form solution for estimating the relative value of collecting multiple times over one object versus emplacing a new object, and show how to optimize experimental design to achieve stated goals while minimizing cost.
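The intuition behind the two designs can be illustrated with a small Monte Carlo sketch. It is not the closed-form solution referenced above; it assumes a hypothetical random-effects model in which each emplaced target has its own latent detectability drawn from a Beta(8, 2) prior (mean PD of 0.8), and repeated passes over the same target are conditionally independent given that latent value. Under these assumptions, repeated passes over one target yield a noisier PD estimate than the same number of passes over unique targets:

```python
import random

random.seed(0)

def pd_estimate_variance(n_targets, passes_per_target, trials=2000):
    """Monte Carlo estimate of the mean and variance of the empirical
    PD estimate under a hypothetical per-target random-effects model.

    Each target's latent detection probability is drawn from a
    Beta(8, 2) prior (assumed for illustration); passes over the same
    target are conditionally independent given that latent probability.
    """
    estimates = []
    for _ in range(trials):
        detections = 0
        for _ in range(n_targets):
            p = random.betavariate(8, 2)  # latent per-target PD
            for _ in range(passes_per_target):
                detections += random.random() < p  # Bernoulli pass
        estimates.append(detections / (n_targets * passes_per_target))
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / trials
    return mean, var

# Same total number of observations (12) under the two designs:
_, var_unique = pd_estimate_variance(n_targets=12, passes_per_target=1)
_, var_repeat = pd_estimate_variance(n_targets=1, passes_per_target=12)
```

Because the repeated passes share one latent detectability, the between-target variance is never averaged out in the second design, so `var_repeat` exceeds `var_unique` even though both designs use twelve observations.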