One promise of telerobotics is the ability to operate in environments that are distant (e.g., deep sea or deep space), dangerous (e.g., nuclear, chemical, or biological environments), or inaccessible to humans for political or legal reasons. A key component of such interaction is a sophisticated human-computer interface that can replicate enough information about the local environment to permit remote navigation and manipulation. This environment replication can be provided, in part, by technologies such as virtual reality. In addition, however, telerobotic interfaces may need to enhance human-machine interaction to assist users in task performance, for example, by governing motion or manipulation controls to avoid obstacles or to restrict interaction with certain objects (e.g., avoiding contact with a live mine or a deep sea treasure). Effective interaction within remote environments therefore requires intelligent virtual interfaces to telerobotic devices. In part to address this problem, MITRE is investigating virtual reality architectures that enable enhanced interaction within virtual environments. Key components of intelligent virtual interfaces include spoken language processing, gesture recognition algorithms, and, more generally, task recognition. These interfaces will eventually also have to take into account properties of the user, the task, and the discourse context in order to adapt to the current situation. While our work has not yet connected virtual interfaces to external robotic devices, we have begun developing the key components of intelligent virtual interfaces for information and training systems.
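The governed-motion idea mentioned above can be illustrated with a minimal sketch: a command filter that stops a commanded end-effector target at the boundary of a keep-out sphere around a restricted object. The zone data, function name, and projection rule here are illustrative assumptions, not the control law of any actual telerobotic system.

```python
import math

# Hypothetical keep-out zones: (center_xyz, radius) spheres around objects
# the operator must not contact (e.g., a live mine). Illustrative values.
KEEP_OUT_ZONES = [((2.0, 0.0, 0.0), 0.5)]

def govern_motion(current, target, zones=KEEP_OUT_ZONES):
    """Return the commanded target, or a point clamped to the zone
    boundary if the target would enter a keep-out sphere.

    A minimal illustrative filter: targets inside a sphere are projected
    outward onto its surface along the center-to-target direction; a
    degenerate target at the center simply holds the current position.
    """
    for center, radius in zones:
        d = math.dist(target, center)
        if d < radius:
            if d == 0.0:
                return current  # no direction to project along; hold position
            scale = radius / d
            return tuple(c + (t - c) * scale
                         for c, t in zip(center, target))
    return target
```

A real interface would apply such a filter continuously along the commanded trajectory, not just at the endpoint, and would combine it with the task-recognition components discussed above to decide which zones are active.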