Robotic systems deployed in space must exhibit flexibility. In particular, an intelligent robotic agent should not have to be reprogrammed for each of the various tasks it may face over the course of its lifetime. However, pre-programming knowledge for every task that may be needed is extremely difficult. A powerful alternative is the notion of an instructible agent: one that can receive task-level instructions and advice from a human advisor. Such an agent must do more than simply memorize the instructions it is given (that would amount to programming). Rather, after mapping instructions into task constructs it can reason with, the agent must determine each instruction's proper scope of applicability. In this paper, we examine the characteristics of instructions, and of agents, that affect learning from instruction. We find that in addition to myriad linguistic concerns, both the situatedness of the instructions (their placement within the ongoing execution of tasks) and the agent's prior domain knowledge affect what can be learned.