This paper presents an architecture for the estimation of the dynamic state, geometric shape, and inertial parameters of objects in orbit, using on-orbit cooperative 3-D vision sensors. This has applications in many current and projected space missions, such as satellite capture and servicing, debris capture and mitigation, and large space structure assembly and maintenance. The method presented here consists of three distinct parts: (1) kinematic data fusion, which condenses sensory data into a coarse estimate of target pose; (2) Kalman filtering, which filters these coarse estimates and extracts the full dynamic state and inertial parameters of the target; and (3) shape estimation, which uses the filtered pose information and the raw sensory data to build a probabilistic map of the target's shape. This method does not rely on feature detection, optical flow, or model matching, and is therefore robust to the harsh sensing conditions of space. Instead, it exploits the well-modeled dynamics of objects in space through the Kalman filter. The architecture is computationally fast, since only coarse measurements need to be provided to the Kalman filter. This paper summarizes the three steps of the architecture. Simulation results follow, showing the theoretical performance of the architecture.
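To illustrate the role of step (2), the sketch below shows how a Kalman filter can smooth coarse, noisy pose measurements and simultaneously recover a dynamic-state parameter (here, a spin rate). This is a minimal, hypothetical example, not the paper's implementation: it tracks a single rotation angle with a constant-rate model, whereas the actual architecture estimates the full rigid-body state and inertial parameters.

```python
import random

def kalman_step(x, P, z, dt, q=1e-4, r=0.05**2):
    """One predict/update cycle for a 2-state [angle, rate] model.

    Dynamics: constant angular rate, F = [[1, dt], [0, 1]].
    Measurement: the angle only, H = [1, 0], with variance r.
    q is a small process-noise term on each state.
    """
    # Predict: x <- F x, P <- F P F^T + Q
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with scalar angle measurement z
    S = Pp[0][0] + r                      # innovation covariance
    K = [Pp[0][0] / S, Pp[1][0] / S]      # Kalman gain
    y = z - xp[0]                         # innovation (residual)
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    # Covariance update: P <- (I - K H) P
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

# Feed the filter coarse pose estimates (truth + noise), as kinematic
# data fusion would provide in step (1) of the architecture.
random.seed(0)
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
true_rate, dt = 0.1, 0.1                  # rad/s, s (illustrative values)
for k in range(200):
    truth = true_rate * k * dt
    z = truth + random.gauss(0.0, 0.05)   # coarse pose measurement
    x, P = kalman_step(x, P, z, dt)
print(x[1])                               # estimated spin rate, near 0.1
```

Note that the filter never sees the rate directly; it is inferred through the dynamics model, which mirrors how the architecture extracts unmeasured state and inertial parameters from pose measurements alone.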