The remarkable success of global methods for analyzing time-resolved spectroscopic data has led to their extension in recent years to larger, more complex and more difficult analyses. Because of the increased complexity and difficulty, it has become even more important to use least-squares statistics to judge the accuracy of parameter estimates and the goodness of fit of the model under consideration. The increased size of these analyses, however, has made computational times so large that a complete least-squares analysis is frequently infeasible. To resolve this difficulty and to provide an adequate computational base for the further growth of global methods, we have developed a comprehensive, efficient methodology, based on separation techniques, for global least-squares analysis of time-resolved data from time- and frequency-domain measurements. We have formulated these techniques to exploit the special form of fluorescence spectral models. In particular, we have been able to reduce the growth in the number of computations per data curve from the cubic rate of previous methodologies to an essentially linear rate. For excited-state kinetics described by time-independent rates, and for data that adequately determine the corresponding model, computational times suitable for interactive computing can thereby be obtained regardless of the magnitude and complexity of the analysis. For example, we have simulated data for energy transfer between the two chromophores of the α subunit of phycoerythrin at 99 consecutive emission wavelengths, each with 500 channels. Parameter estimation required less than 0.4 seconds per iteration on an IBM 3081, a reduction of more than three orders of magnitude over previous methods. Computational storage requirements undergo a similar reduction. These techniques are equally applicable to error estimation.
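The central separation idea can be illustrated with a small sketch. This is a generic variable-projection example, not the authors' implementation: the biexponential model, the decay rates, the number of wavelengths and the noise level are all invented for illustration. The shared decay rates are the only nonlinear parameters; the emission amplitudes at every wavelength are eliminated inside the residual function by a single linear least-squares solve, so the work per data curve grows only linearly.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(rates, t, data):
    # Design matrix of exponential basis functions (channels x components),
    # shared across all emission wavelengths.
    X = np.exp(-np.outer(t, rates))
    # Eliminate the linear amplitude parameters: solve the linear
    # least-squares subproblem for every wavelength's amplitudes at once.
    amps, *_ = np.linalg.lstsq(X, data, rcond=None)
    # Residuals of all data curves, flattened for the nonlinear solver.
    return (data - X @ amps).ravel()

# Simulated example: two shared decay rates, 8 emission wavelengths,
# 500 time channels per curve (all values invented for illustration).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
true_rates = np.array([0.5, 2.0])
true_amps = rng.uniform(0.1, 1.0, (2, 8))        # amplitudes per wavelength
clean = np.exp(-np.outer(t, true_rates)) @ true_amps
data = clean + 0.01 * rng.standard_normal(clean.shape)

# Only the two rates are iterated on; amplitudes are recovered implicitly.
fit = least_squares(residuals, x0=[0.3, 1.0], bounds=(0.0, np.inf),
                    args=(t, data))
print(np.sort(fit.x))  # recovered rates, close to the true [0.5, 2.0]
```

Because the nonlinear solver sees only the two rate parameters, adding more emission wavelengths enlarges only the cheap linear solve, which is the source of the linear growth in cost per data curve.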
We have derived a separated form of the equations for standard deviations, together with algorithms for computing nonlinear joint confidence limits that are analogous to those for parameter estimation (and therefore require essentially the same amount of time per iteration). Standard deviations and linear estimates of confidence limits are all computed in a single iteration. Two types of confidence limits, support-plane and principal-axis, can be calculated; the latter allow the limits for strongly correlated parameters to be estimated simultaneously by means of more stable algorithms. If only linear joint confidence limits are computed for the emission intensities, as is standard practice in the literature, then a complete analysis of the above system, including parameter and error estimates for the two decay rates, the energy transfer rate and approximately 200 emission intensities, can be obtained in less than 10 seconds. The equations are still expressed in normal form, which is simpler and more familiar to most users than singular-value decompositions and allows a wider range of algorithms to be employed in their solution. In particular, this approach facilitates the use of a variety of algorithms of higher order (i.e., with stronger convergence properties) than those customarily available. We have found the use of such algorithms to be crucial for achieving rapid convergence to parameter estimates in more difficult analyses.
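As an illustration of the error-estimation step, the following sketch computes standard deviations and linear confidence half-widths from the normal-equations matrix JᵀJ of a converged fit. It is a textbook linear-covariance calculation, not the paper's separated algorithm; the single-exponential model and all numerical values are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import t as student_t

# Invented example: single-exponential decay y = a * exp(-k * x) + noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 200)
y = 2.0 * np.exp(-0.8 * x) + 0.02 * rng.standard_normal(x.size)

def resid(p):
    a, k = p
    return a * np.exp(-k * x) - y

fit = least_squares(resid, x0=[1.0, 1.0])

# Linear covariance estimate from the normal equations J^T J.
dof = x.size - fit.x.size
s2 = 2.0 * fit.cost / dof                        # fit.cost = 0.5 * SSR
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
std = np.sqrt(np.diag(cov))                      # standard deviations

# Linear 95% confidence half-widths for each parameter.
half = student_t.ppf(0.975, dof) * std

# Principal axes of the covariance: eigenvectors give the directions
# along which strongly correlated parameters vary independently, the
# basis for principal-axis confidence limits.
evals, evecs = np.linalg.eigh(cov)

print(fit.x, std, half)
```

The eigendecomposition in the last step is what makes principal-axis limits attractive for correlated parameters: in the rotated coordinates the error surface is (to linear order) axis-aligned, so limits for all parameters can be found together with better-conditioned arithmetic.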