In recent years, real-time Magnetic Resonance Imaging (RT-MRI) has been used to acquire vocal tract data in support of articulatory studies. The large number of images resulting from these acquisitions must be processed, and the resulting data analysed, to extract articulatory features. This analysis is often performed by linguists and phoneticians and requires not only tools providing a high-level exploration of the data, to gather insight into the different aspects of speech, but also a set of features to compare different vocal tract configurations in static and dynamic scenarios. To make the data available in a faster and more systematic fashion, without the continuous direct involvement of image processing specialists, a framework is being developed to bridge the gap between the more technical aspects of the raw data and the higher-level analysis required by speech researchers. In its current state, it already includes segmentation of the vocal tract, allows users to explore the different aspects of the acquired data using coordinated views, and supports the comparison of vocal tract configurations. Beyond the traditional method of visually comparing vocal tract profiles, a quantitative method is proposed that considers relevant anatomical features and is supported by an abstract representation of the data for both static and dynamic analysis.