This paper documents an analytical study of the expectation of resolving individual members of a cluster of objects as a function of time after deployment, the number and distribution of objects in the cluster, their relative separation velocities, sensor resolution capability, and the shape of the cluster (in short, the local object density). Multiple methods of modeling object clusters were investigated and found to give equivalent results. A simple set of equations has been derived that fits the modeled data over a wide range of parameter variations. For N objects in a cluster with an average density of d objects per resolution cell, R ≈ N·e^(−d) is the expected number of objects resolved, and P ≈ (N − R)/d is the expected number of subclusters perceived. For uniform cluster densities, d is inversely proportional to the square of time, and a method is shown for calculating d for non-uniform cluster densities. In addition, an approximately constant relationship between the number of objects perceived and the number of objects resolved is shown: R ≈ P²/N. Several applications of these relationships of interest to the Strategic Defense Initiative (SDI) are examined, including the `Cheshire Cat Effect', in which the number of perceived objects varies with resolution and sensor sensitivity. System-level implications of target density during boost phase and during the cluster-tracking phase of mid-course are also covered. The behavior of large numbers of clusters in a threat tube is examined and characterized as the individual clusters expand, overlap, and merge into a `supercluster.' An equilibrium limit on the resolution possible within a `supercluster' is shown.
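The two expected-value formulas quoted in the abstract, R ≈ N·e^(−d) and P ≈ (N − R)/d, together with the approximate invariant R ≈ P²/N, can be sketched numerically. The following is an illustrative Python sketch only; the function names and the example values (N = 100, d = 0.5) are assumptions for demonstration, not taken from the paper:

```python
import math

def expected_resolved(n_objects: float, density: float) -> float:
    """Expected number of individually resolved objects: R = N * e^(-d),
    where d is the average number of objects per resolution cell."""
    return n_objects * math.exp(-density)

def expected_subclusters(n_objects: float, density: float) -> float:
    """Expected number of perceived subclusters: P = (N - R) / d."""
    r = expected_resolved(n_objects, density)
    return (n_objects - r) / density

# Hypothetical example: 100 objects at 0.5 objects per resolution cell.
N, d = 100, 0.5
R = expected_resolved(N, d)
P = expected_subclusters(N, d)
print(f"resolved R = {R:.1f}, perceived subclusters P = {P:.1f}")

# The abstract's approximate invariant: R should be close to P^2 / N.
print(f"P^2/N = {P * P / N:.1f} (compare with R = {R:.1f})")
```

At these values the invariant holds to within a few percent, consistent with the abstract's description of it as approximately constant.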