This paper describes an initial attempt to calibrate a large, random, sparse, high-frequency, two-dimensional array using transmissions from radio stations. A semi-quantitative discussion of various intuitive ideas for calibration is presented, along with samples of typical results from numerical testing with synthetic and real data. First, a theoretical discussion of the effect of calibration errors is provided, in which a distinction is made between mild and severe calibration errors. A variety of techniques is then suggested for both cases. For mild calibration errors, in which true peaks are still apparent in the spectrum, a simple approach is presented in which the information in the true peaks is used to approximate the field at the array, from which the correct calibration can be deduced. This technique converges to the correct solution given sufficiently many independent data sets. For severe calibration errors, in which the spectrum contains only speckle, several techniques are proposed to obtain a crude calibration of the array. One technique fits a plane wave to the uncalibrated receiver voltages. Another forces, or assumes, a plane wave at the array and then deduces the error by comparing different data sets. A third uses a Monte Carlo approach to generate the calibration weights; a discussion of the correct interpretation of its results is provided. If this crude initial calibration reduces the errors to the mild case, calibration can then continue in a two-step procedure using the mild-case techniques.
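The mild-case idea, in which a visible true peak supplies a model of the field at the array and the measured voltages are divided by that model to estimate the per-element errors, can be illustrated with a small synthetic simulation. Everything below (array size, positions, error levels, SNR) is an assumed toy setup for illustration, not the paper's actual array or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N elements at random 2-D positions (in wavelengths),
# each with an unknown mild complex gain error to be recovered.
N = 32
pos = rng.uniform(-10.0, 10.0, size=(N, 2))          # positions / wavelength
true_gain = np.exp(1j * rng.uniform(-0.5, 0.5, N))   # mild phase errors (rad)

def snapshot(direction, snr_db=20.0):
    """Receiver voltages for one plane wave from a unit 2-D direction
    vector, corrupted by the gain errors and additive noise."""
    phase = 2.0 * np.pi * pos @ direction            # geometric phase/element
    clean = true_gain * np.exp(1j * phase)
    noise = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    return clean + noise * 10.0 ** (-snr_db / 20.0) / np.sqrt(2.0)

# If the true peak is apparent, the source direction is known, so the model
# field exp(i * phase) approximates the field at the array.  Dividing the
# measured voltages by the model voltages gives a per-data-set estimate of
# the gain errors; averaging over independent data sets (here, sources at
# different azimuths) converges toward the true errors.
K = 50
est = np.zeros(N, dtype=complex)
for _ in range(K):
    az = rng.uniform(0.0, 2.0 * np.pi)
    d = np.array([np.cos(az), np.sin(az)])
    model = np.exp(1j * 2.0 * np.pi * pos @ d)
    est += snapshot(d) / model                       # one gain estimate
est /= K

phase_err = np.angle(est * np.conj(true_gain))       # residual phase error
print("max residual phase error (rad):", np.abs(phase_err).max())
```

In this toy model the per-data-set estimate is the true gain plus a noise term, so the residual shrinks roughly as the square root of the number of independent data sets, which is the convergence behavior the abstract claims for the mild case.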