Fast recursive least squares (FRLS) algorithms have been extensively studied since the mid-1970s for adaptive signal processing applications. Despite their large number and apparent diversity, they have been derived almost exclusively with two techniques: the partitioned-matrix inversion lemma and the geometric theory of least squares. Surprisingly, Chandrasekhar factorizations, which were introduced in the early 1970s to derive fast Kalman filters, have seen little use, even though fast RLS algorithms can also be derived with this technique in various forms, either unnormalized or over-normalized. For instance, the well-known fast transversal filter (FTF) algorithm corresponds exactly to a particular case of the Chandrasekhar equations. The aim of this paper is to assess the value of the Chandrasekhar technique for FRLS estimation. The corresponding equations have a somewhat generic character that can help reveal the links between FRLS algorithms and other least squares estimation problems, since they have been used successfully to derive fast algorithms for estimating random variables through regularization techniques and for computing cross-validation criteria in statistics. Chandrasekhar factorizations can also help in teaching fast adaptive algorithms: they are easy to understand, they apply to a wide variety of algorithmic problems, and, in a least squares context, there is no need to learn the FRLS algorithms separately.