Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking
algorithms have been developed to date, each with its own advantages and restrictions. Some have also
made their way into the mobile world, powering augmented reality applications on phones with built-in
cameras. In this paper, we compare the performance of three feature- or landmark-guided motion-tracking
algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM.
We analyze and compare the complexity, accuracy, sensitivity, robustness, and restrictions of each of these
methods. Our performance tests are conducted in two stages: the first uses video sequences created with
simulated camera movements along the six degrees of freedom to compare tracking accuracy, while the second
analyzes the robustness of the algorithms under manipulations such as image scaling and frame skipping.
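By way of illustration only, the sketch below shows one way such manipulated variants of a test sequence could be generated with OpenCV; the file names, scale factor, and skip interval are hypothetical and are not the parameter values used in our tests.

    # Sketch (illustrative, not from the experiments): produce a scaled and
    # frame-skipped variant of a test video for robustness testing.
    import cv2

    def make_variant(src_path, dst_path, scale=0.5, skip=2):
        """Write a copy of the video, resized by `scale`, keeping every `skip`-th frame."""
        cap = cv2.VideoCapture(src_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) * scale)
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) * scale)
        out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"XVID"), fps / skip, (w, h))
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % skip == 0:                       # frame skipping: drop intermediate frames
                out.write(cv2.resize(frame, (w, h)))  # image scaling: shrink each kept frame
            idx += 1
        cap.release()
        out.release()

    # Example: half-resolution sequence keeping every second frame (hypothetical file names)
    make_variant("simulated_6dof.avi", "variant_scale050_skip2.avi", scale=0.5, skip=2)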
The small form factor and unergonomic keys of mobile phones call for new, more natural approaches to
user interface (UI) design. In this paper, we propose intuitive motion-based UI controls for mobile devices with
built-in cameras, driven by visual detection of the device's self-motion. We developed a car-racing game to
test our new interface, and we conducted a user study to evaluate the accuracy, sensitivity, responsiveness and
usability of our proposed system. Results show that our motion-based interface is well received by users and
clearly preferred over traditional button-based controls.
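To make the idea of a motion-based control concrete, the following sketch maps detected lateral self-motion to a steering value, using dense optical flow as a stand-in for the tracking algorithms compared above. The flow-based estimate, the dead zone, and the scaling constant are illustrative assumptions, not the control mapping evaluated in our user study.

    # Sketch (illustrative only): turn estimated horizontal camera motion into a
    # steering command in [-1, 1] for a racing game.
    import cv2
    import numpy as np

    def steering_from_flow(prev_gray, curr_gray, dead_zone=0.5):
        """Estimate lateral device motion from mean optical flow and map it to steering."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx = float(np.mean(flow[..., 0]))             # mean horizontal image motion (pixels/frame)
        if abs(dx) < dead_zone:                       # ignore small hand jitter
            return 0.0
        return float(np.clip(-dx / 10.0, -1.0, 1.0))  # scene moves opposite to the device, so invert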