With the newly planned Sentinel missions, the availability of Earth Observation data is increasing every day, enabling a growing number of applications built on these data. Currently, three of the five missions have been launched and are delivering a wealth of data and imagery of the Earth's surface. For example, Sentinel-1 carries an advanced radar instrument that provides an all-weather, day-and-night supply of Earth imagery, while the second mission, Sentinel-2, carries an optical instrument payload that samples 13 spectral bands at different resolutions. Although tools exist for the automated loading and visual exploration of Sentinel data, we still face the problems of extracting relevant structures from the images, finding similar patterns in a scene, exploiting the data, and creating end-user applications based on these processed data. In this paper, we present our approach for processing radar and multi-spectral Sentinel data. Our approach comprises three main steps: 1) the generation of a data model that explains the information contained in a Sentinel product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; and 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback methods.
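The three-step pipeline above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual implementation: the descriptor choice (per-band mean and standard deviation as stand-ins for primitive spectral/texture descriptors), the SQLite backend, and the nearest-neighbour ranking used as a placeholder for the relevance-feedback stage are all hypothetical.

```python
import sqlite3
import numpy as np

def extract_descriptors(patch):
    """Step 1: primitive descriptors for one image patch (bands x H x W).

    Per-band mean and standard deviation serve as illustrative stand-ins
    for the primitive descriptors extracted from a Sentinel product.
    """
    return np.concatenate([patch.mean(axis=(1, 2)), patch.std(axis=(1, 2))])

def store_model(db, patches, metadata):
    """Step 2: persist descriptors and metadata entries in a database."""
    db.execute("CREATE TABLE IF NOT EXISTS model "
               "(patch_id INTEGER PRIMARY KEY, meta TEXT, descriptor BLOB)")
    for pid, (patch, meta) in enumerate(zip(patches, metadata)):
        d = extract_descriptors(patch).astype(np.float64)
        db.execute("INSERT INTO model VALUES (?, ?, ?)",
                   (pid, meta, d.tobytes()))

def query_similar(db, example_patch, top_k=2):
    """Step 3 (sketch): rank stored patches by descriptor distance.

    A full relevance-feedback loop would iteratively re-weight these
    distances from user-supplied positive/negative labels.
    """
    q = extract_descriptors(example_patch)
    rows = db.execute("SELECT patch_id, descriptor FROM model").fetchall()
    scored = [(pid, np.linalg.norm(np.frombuffer(blob) - q))
              for pid, blob in rows]
    return [pid for pid, _ in sorted(scored, key=lambda t: t[1])[:top_k]]

# Usage with synthetic 13-band patches (Sentinel-2 samples 13 bands).
rng = np.random.default_rng(0)
patches = [rng.normal(loc=i, size=(13, 8, 8)) for i in range(4)]
db = sqlite3.connect(":memory:")
store_model(db, patches, [f"patch-{i}" for i in range(4)])
nearest = query_similar(db, patches[0])  # patches most similar to patch 0
```

The database decouples the (expensive) descriptor extraction from the interactive semantic-labelling stage: descriptors are computed once per product, while similarity queries and feedback iterations run against the stored model.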