Despite its superior soft-tissue contrast, conventional MRI is qualitative in nature, which presents a bottleneck for quantitative image analysis and data-driven medicine. Various investigations have been devoted to overcoming this limitation, but practical solutions remain elusive. Leveraging the unique feature-extraction ability of emerging deep learning, we investigate a data-driven strategy that separates the contributing factors intertwined in a single <i>T</i><sub>1</sub>-weighted image to derive quantitative <i>T</i><sub>1</sub> and ρ maps without any additional image acquisition. Furthermore, in the proposed deep learning framework, compensation for radiofrequency field inhomogeneities is achieved automatically, without an extra measurement of the <i>B</i><sub>1</sub> map. These tasks are accomplished using self-attention deep convolutional neural networks, which make efficient use of both local and non-local information. The premise of the approach is that qualitative and quantitative MRI, termed Q<sup>2</sup>MRI, can be attained simultaneously without changing the existing imaging protocol. Q<sup>2</sup>MRI lays the foundation for the next generation of digital medicine and provides a promising quantitative imaging tool for a wide spectrum of biomedical applications, ranging from disease diagnosis, treatment planning, and prognosis to the assessment of therapeutic response.
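To illustrate why ρ, <i>T</i><sub>1</sub>, and the transmit field <i>B</i><sub>1</sub> are intertwined in a single weighted image, the sketch below evaluates a standard steady-state spoiled gradient-echo (SPGR) signal model. The specific sequence, flip angle, and TR values are illustrative assumptions for this example only, not details of the proposed framework.

```python
import math

def spgr_signal(rho, t1_ms, b1_scale, flip_deg=15.0, tr_ms=10.0):
    """Steady-state spoiled gradient-echo (SPGR) signal.

    rho      : proton density (arbitrary units)
    t1_ms    : longitudinal relaxation time in milliseconds
    b1_scale : relative transmit-field strength (1.0 = nominal)
    flip_deg : nominal flip angle in degrees (illustrative value)
    tr_ms    : repetition time in milliseconds (illustrative value)
    """
    alpha = math.radians(flip_deg) * b1_scale  # effective flip angle under B1 scaling
    e1 = math.exp(-tr_ms / t1_ms)              # longitudinal recovery factor
    return rho * math.sin(alpha) * (1.0 - e1) / (1.0 - math.cos(alpha) * e1)

# Two different (rho, T1, B1) combinations can yield nearly the same intensity;
# this degeneracy is what makes recovering quantitative maps from one image
# ill-posed and motivates a learned, data-driven separation of the factors.
s_a = spgr_signal(rho=1.0, t1_ms=1000.0, b1_scale=1.0)
s_b = spgr_signal(rho=0.9, t1_ms=800.0, b1_scale=1.1)
print(s_a, s_b)  # the two signals differ by only a few percent
```

In this model the measured intensity is a single scalar per voxel, while three unknowns contribute to it, which is the separation problem the self-attention networks are trained to resolve.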