Face hallucination in a single-modality setting has been studied extensively, but face hallucination in real-world environments with multiple modalities is still in its early stages. This paper presents a unified framework that solves the face hallucination problem across multiple modalities, i.e., different expressions, poses, and illuminations. Almost all state-of-the-art face super-resolution methods generate only a single output with the same modality as the low-resolution input. Our proposed framework is able to generate multiple outputs of different new modalities from a single low-resolution input. It comprises a global transformation with diagonal loading for modeling the mappings among different new facial modalities, and a local position-patch based method with weights compensation for incorporating image details. Experimental results demonstrate the superiority of our framework.
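As an illustration of the diagonal-loading idea mentioned above, the sketch below shows how position-patch methods commonly solve for reconstruction weights of a low-resolution patch over a dictionary of training patches, with a loading term added to the Gram matrix's diagonal to keep the linear system well conditioned. This is a minimal, generic sketch of the diagonal-loading technique, not the paper's exact formulation; the function name, the parameter `tau`, and the trace-based scaling of the loading term are illustrative assumptions.

```python
import numpy as np

def patch_weights_diagonal_loading(x, Y, tau=1e-3):
    """Reconstruction weights of LR patch x over training patches Y.

    x : (d,) flattened low-resolution input patch
    Y : (d, n) dictionary whose columns are training patches at the
        same spatial position
    tau : illustrative loading strength (assumed, not from the paper)
    """
    D = Y - x[:, None]                 # residuals to each training patch
    G = D.T @ D                        # Gram matrix, (n, n), often singular
    n = G.shape[0]
    # Diagonal loading: add a small multiple of the average eigenvalue
    # (trace/n) to the diagonal so the solve is stable even when n > d
    # or training patches are nearly collinear.
    w = np.linalg.solve(G + tau * np.trace(G) / n * np.eye(n), np.ones(n))
    return w / w.sum()                 # normalize so weights sum to one
```

In such schemes the high-resolution patch is then reconstructed by applying the same weights to the corresponding high-resolution dictionary, e.g. `Y_hr @ w`, before the patches are stitched into the output face.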