The initial development of two first-person perspective video activity recognition systems is discussed. The first system, First-Person Fall Detection (UFall), recognizes when a person wearing or holding the mobile vision system has fallen; the problem of fall detection is thus tackled from the unique first-person perspective. The second system, the directed CrossWalk System (UCross), detects the user's movement across a crosswalk and is intended to help a low-vision person navigate. In both cases, the user wears or holds the camera device to monitor or inspect the environment. This first-person perspective yields unusual fall data, which is captured and used to build the fall detection system. Both systems employ machine learning, feeding video input to trained Long Short-Term Memory (LSTM) networks. These first-person video activity recognition systems use the TensorFlow framework and are deployed on mobile phones as a proof of concept. These applications could be useful for low-vision people and, in the case of fall detection, for senior citizens, police, construction workers, and others in inspection-oriented jobs, to assist users who have fallen. The successes and challenges encountered with this unique first-person perspective data are presented along with future avenues of work.
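To make the core recurrent unit named above concrete, the following is a minimal sketch of a single LSTM cell step in plain NumPy. All dimensions, weights, and the random "clip" of per-frame feature vectors are illustrative assumptions, not the authors' trained models or architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.
    x: input features (feat,); h, c: hidden and cell state (hid,);
    W: (4*hid, feat), U: (4*hid, hid), b: (4*hid,) stacked as [i, f, g, o].
    """
    hid = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0 * hid:1 * hid])   # input gate
    f = sigmoid(z[1 * hid:2 * hid])   # forget gate
    g = np.tanh(z[2 * hid:3 * hid])   # candidate cell update
    o = sigmoid(z[3 * hid:4 * hid])   # output gate
    c_new = f * c + i * g             # update cell state
    h_new = o * np.tanh(c_new)        # emit new hidden state
    return h_new, c_new

# Run a short random "clip" of per-frame feature vectors through the cell.
rng = np.random.default_rng(0)
feat, hid, frames = 8, 4, 5
W = rng.standard_normal((4 * hid, feat)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)
h, c = np.zeros(hid), np.zeros(hid)
for t in range(frames):
    x = rng.standard_normal(feat)     # stand-in for a per-frame feature vector
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```

In the actual systems, such a recurrent layer would be trained in TensorFlow on sequences of video-frame features and the final hidden state fed to a classifier (e.g. fall vs. no-fall); the sketch only shows the per-step recurrence.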