While interacting with a machine, humans naturally form beliefs about the machine's behavior, and these beliefs affect the interaction. Since humans and machines have imperfect information about each other and their environment, a natural model for their interaction is a game. Such games have been investigated from the perspective of economic game theory, and some results on discrete decision-making have been translated to the neuromechanical setting, but there is little work on the continuous sensorimotor games that arise when humans interact with machines in a dynamic closed loop. We study these games both theoretically and experimentally, deriving predictive models for the steady-state (i.e. equilibrium) and transient (i.e. learning) behaviors of humans interacting with other agents (humans and machines). Specifically, we consider experiments wherein agents are instructed to control a linear system so as to minimize a given quadratic cost functional, i.e. the agents play a Linear-Quadratic game. Using our recent results on gradient-based learning in continuous games, we derive predictions regarding steady-state and transient play, and compare these predictions with empirical observations of human sensorimotor learning in a teleoperation testbed.
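To make the gradient-based learning model concrete, the following is a minimal sketch (not the paper's experimental model): two agents with scalar actions repeatedly descend the gradients of their own quadratic costs, and their joint play converges to the game's Nash equilibrium. The specific costs, learning rate, and iteration count are illustrative assumptions.

```python
# Minimal sketch of simultaneous gradient play in a two-player
# quadratic (scalar Linear-Quadratic-style) game. The costs below
# are illustrative assumptions, not the paper's experimental setup.

def grad_f1(u1, u2):
    # Partial derivative of player 1's cost f1(u1, u2) = (u1 - 1)^2 + u1*u2
    # with respect to player 1's own action u1.
    return 2.0 * (u1 - 1.0) + u2

def grad_f2(u1, u2):
    # Partial derivative of player 2's cost f2(u1, u2) = (u2 + 1)^2 + u1*u2
    # with respect to player 2's own action u2.
    return 2.0 * (u2 + 1.0) + u1

u1, u2 = 0.0, 0.0   # initial play
lr = 0.1            # learning rate (step size)

for _ in range(500):
    # Each player updates only its own action, using only the gradient
    # of its own cost -- neither player coordinates with the other.
    g1, g2 = grad_f1(u1, u2), grad_f2(u1, u2)
    u1, u2 = u1 - lr * g1, u2 - lr * g2

# The unique Nash equilibrium solves both first-order conditions,
# 2(u1 - 1) + u2 = 0 and 2(u2 + 1) + u1 = 0, giving (u1, u2) = (2, -2);
# gradient play converges there for a sufficiently small learning rate.
print(u1, u2)
```

In this sketch the transient of the coupled updates corresponds to the "learning" behavior studied in the paper, and the fixed point of the updates corresponds to the "equilibrium" behavior.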