This paper proposes a novel combined optical-electronic simulation of an indoor environment consisting of four luminaires with tunable LEDs of different Correlated Color Temperatures (CCTs). We investigate the ability to perform Visible Light Positioning (VLP), i.e., to identify receiver positions, in such a scenario with tunable LEDs. To this end, a ray-tracing simulation, which generates a list of rays impinging on the receiver's surface (each carrying its optical power, CCT, and corresponding wavelength), is combined with a Simulink/Simscape simulation of an electronic receiver with wavelength-dependent sensitivity. This configuration allows us to evaluate the impact of tunable CCT on the electronic design, in particular with regard to the optimization of certain parameters. We show how the number of unique values in an offline fingerprinting map can be optimized, which is a crucial requirement for indoor positioning based on the fingerprinting method. With the outlined solution approach, a system-level tool is formed, based on a precise and comprehensive optical-electronic simulation, that allows VLP scenarios to be assessed.
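The coupling between the optical and electronic simulation stages described above can be illustrated by a minimal sketch: each ray-traced ray contributes a photocurrent proportional to its power, weighted by the photodiode's wavelength-dependent responsivity. The ray values, responsivity table, and function names below are illustrative assumptions, not the paper's actual models.

```python
# Illustrative sketch: converting a ray-traced ray list into a photocurrent,
# assuming a tabulated wavelength-dependent responsivity R(lambda) in A/W.
# All numerical values here are assumed for demonstration purposes.

# Each ray: (optical power in W, wavelength in nm) impinging on the receiver.
rays = [(1.2e-6, 450.0), (0.8e-6, 550.0), (0.5e-6, 620.0)]

# Coarse responsivity table for a generic silicon photodiode (assumed values).
responsivity_table = {450.0: 0.20, 550.0: 0.33, 620.0: 0.42}

def responsivity(wavelength_nm):
    """Linearly interpolate responsivity between tabulated points."""
    pts = sorted(responsivity_table.items())
    if wavelength_nm <= pts[0][0]:
        return pts[0][1]
    if wavelength_nm >= pts[-1][0]:
        return pts[-1][1]
    for (w0, r0), (w1, r1) in zip(pts, pts[1:]):
        if w0 <= wavelength_nm <= w1:
            t = (wavelength_nm - w0) / (w1 - w0)
            return r0 + t * (r1 - r0)

def photocurrent(ray_list):
    """Sum each ray's power weighted by the responsivity at its wavelength."""
    return sum(p * responsivity(w) for p, w in ray_list)

print(photocurrent(rays))  # total photocurrent in amperes
```

In the paper's setup, this aggregation is performed inside the Simulink/Simscape receiver model rather than in a standalone script; the sketch only shows why tuning the CCT (and thus the spectral mix of rays) changes the electrical signal seen downstream.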
Human activity recognition (HAR) has gained great interest in today's research activities, especially with regard to demographic change. In particular, when complex activities have to be recognized, HAR systems often rely on multiple sensors that must be worn by the user. In this work, we propose a novel approach that combines a segmented optical receiver with a single IMU device. By fusing IMU-related real-world experimental data with precise optical simulations of the segmented optical receiver, we can not only determine the activity of the user, including complex movements like walk-up and walk-down, but also determine the user's location.
In this work, we investigate a novel angle diversity receiver concept for visible light positioning. The receiver concept, consisting of an ultrathin Fresnel lens embedded in an aperture and mounted on top of a CMOS sensor, has been tested and optimized by ray-tracing simulations. This angle-dependent receiver system offers compact dimensions, a high field of view, an off-the-shelf sensor, and a relatively large amount of collected light. The origination of the previously calculated Fresnel lens structure is performed by means of grayscale laser lithography. In the presented receiver system, the incoming radiant intensity distribution is converted into an irradiance distribution on the CMOS sensor, where different angles of incidence of the incoming light are refracted towards different areas of the sensor. To verify the optical system experimentally, a prototype of the receiver is placed in a goniometer setup to record images under controlled angles of incidence. The irradiance distributions recorded in the experiment are compared with the irradiance distributions obtained from a realistic ray-tracing model. By direct comparison between experiment and simulation, we verify the optical functionality of the developed receiver optics and investigate the effect of manufacturing imperfections.
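The core idea of such an angle-dependent receiver, i.e., that the angle of incidence can be recovered from where light lands on the sensor, can be sketched with a strongly simplified thin-lens model. The abstract does not specify the lens parameters or mapping; the focal length and the tangent mapping below are assumptions for illustration only, not the paper's Fresnel optics.

```python
import math

# Strongly simplified, assumed model: a thin lens of effective focal length f
# maps an incidence angle theta onto a lateral spot position x = f * tan(theta)
# on the CMOS sensor, so the angle can be recovered from the spot's centroid.

f_mm = 2.0  # assumed effective focal length (illustrative value)

def spot_position(theta_deg):
    """Spot centroid position (mm) on the sensor for incidence angle theta."""
    return f_mm * math.tan(math.radians(theta_deg))

def angle_from_spot(x_mm):
    """Invert the mapping: recover the incidence angle from the spot position."""
    return math.degrees(math.atan(x_mm / f_mm))

# The mapping is invertible within the field of view:
for theta in (0.0, 15.0, 30.0):
    assert abs(angle_from_spot(spot_position(theta)) - theta) < 1e-9

print(spot_position(30.0))  # lateral spot offset in mm at 30 degrees
```

A real Fresnel lens with manufacturing imperfections deviates from this ideal mapping, which is precisely what the paper's comparison between goniometer measurements and ray-tracing simulations quantifies.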
Fingerprinting-based Visible Light Positioning is a promising candidate for large-scale indoor positioning tasks. In fingerprinting, signal characteristics are grouped in a fingerprint map together with the respective locations inside the indoor environment. By comparing live signal measurements against the fingerprint map, the closest match is selected as the current position estimate. However, the fingerprint map has to be generated beforehand in the so-called offline phase: the time-consuming process of sampling the signal characteristics throughout the environment in which positioning is desired. Here, we propose a fingerprinting-based positioning approach that mitigates the need for the offline phase by taking advantage of the VLC data transmission capabilities of the LED luminaires of the obligatory room lighting. Based on the room and luminaire configuration data transmitted to the receiving device, the illumination characteristics in the room can be calculated with simplified analytical formalisms, substituting for an experimentally measured offline phase. We demonstrate the effectiveness of our approach with ray-tracing simulations, under the assumption that the receiving device is equipped with an angular sectored receiver; the results of the ray-tracing simulations mimic real-world measurements with the receiver in the online phase. We show that decimeter-level down to centimeter-level accuracies are achievable with such an approach.
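The fingerprinting matching step described above, comparing a live measurement against the map and selecting the closest entry, can be sketched as a nearest-neighbor search. The map layout, RSS values, and sector count below are made up for illustration; the paper's map is derived analytically from the transmitted room and luminaire configuration.

```python
import math

# Illustrative fingerprint map: position (x, y) in meters -> RSS vector with
# one entry per sector of the angular sectored receiver. Values are assumed.
fingerprint_map = {
    (0.5, 0.5): [0.90, 0.10, 0.05, 0.02],
    (1.5, 0.5): [0.40, 0.55, 0.08, 0.03],
    (0.5, 1.5): [0.12, 0.08, 0.85, 0.20],
    (1.5, 1.5): [0.05, 0.30, 0.25, 0.80],
}

def estimate_position(measurement, fp_map):
    """Return the mapped position whose RSS vector is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fp_map, key=lambda pos: dist(fp_map[pos], measurement))

live = [0.38, 0.52, 0.10, 0.04]  # simulated online-phase measurement
print(estimate_position(live, fingerprint_map))  # closest map entry wins
```

Finer map grids improve the achievable accuracy at the cost of more map entries, which is why generating the map analytically, rather than measuring it, makes dense grids practical.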
KEYWORDS: Visible radiation, Receivers, Machine learning, Light sources and illumination, Transmitters, Sensors, Received signal strength, RGB color model, Photodiodes
Achieving precise information on the position of a subject without changing the luminaire infrastructure is a major challenge in positioning approaches that rely on visible light. Centimeter-scale positioning accuracy is typically achieved by implementing complex receiver unit designs or by adapting the existing luminaires. In this context, we propose a visible light positioning approach that can determine the position of a person in certain areas of a room without the need for lighting infrastructure modifications. With this approach, one can identify the position with the help of the existing luminaires used for the obligatory room lighting. The receiver, an RGB-sensitive photodiode, is positioned in an optimized way to support both the positioning task and the comfort of the user. Based on received signal strength measurements in the red, green, and blue channels, we achieve the positioning task by segmenting the room into different areas according to the respective impinging light and by utilizing machine learning clustering. Our results show the influence of different segmentation strategies and parameters on the number and size of the distinguishable areas inside the room. We then demonstrate the achievable accuracy of our solution approach in real-world experiments. Our results show that such light-based positioning data can be fused with IMU sensor data for recognizing human activity.
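The clustering-based room segmentation described above can be sketched with a minimal k-means over RGB received-signal-strength vectors: points illuminated by a similar spectral mix group into the same area. The abstract does not name the clustering algorithm, so k-means, the sample values, and the farthest-point initialization below are illustrative assumptions.

```python
# Illustrative sketch: cluster RGB received-signal-strength samples into room
# areas with a minimal pure-Python k-means. All sample values are assumed.

samples = [
    (0.90, 0.20, 0.10), (0.85, 0.25, 0.15), (0.88, 0.18, 0.12),  # red-dominant area
    (0.20, 0.80, 0.30), (0.25, 0.75, 0.35), (0.18, 0.82, 0.28),  # green-dominant area
    (0.10, 0.30, 0.90), (0.15, 0.28, 0.85), (0.12, 0.35, 0.88),  # blue-dominant area
]

def dist2(a, b):
    """Squared Euclidean distance between two equal-length tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def init_centroids(points, k):
    """Farthest-point initialization: spreads starting centroids apart."""
    centroids = [points[0]]
    while len(centroids) < k:
        far = max(points, key=lambda p: min(dist2(p, c) for c in centroids))
        centroids.append(far)
    return centroids

def kmeans(points, k, iters=20):
    """Minimal Lloyd's k-means; returns centroids and a label per point."""
    centroids = init_centroids(points, k)
    labels = []
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, labels

centroids, labels = kmeans(samples, k=3)
print(labels)  # samples from the same area share a cluster label
```

In a live system, each cluster corresponds to one distinguishable area of the room, and a new RGB measurement is assigned to the nearest centroid to yield the area-level position estimate.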
Recently, indoor activity monitoring of human beings has gained more and more relevance. In particular, the determination of the spatial and temporal context of a user is of utmost importance in many applications such as monitoring or safety. In this paper, we present a framework that can identify what activity a user is performing, where, and for how long, using a low-cost, low-complexity system. Our system comprises only a single inertial measurement unit and a single RGB-sensitive photodiode, with no prerequisite for infrastructural modifications. By using independent decision trees, the training effort can also be kept minimal. Additionally, we experimentally verify the optimal set of features to be used for the framework. Overall, the achieved results are above 90 % in correctly determining the room the user is in, the activity the user is performing, and the direction in which the activity is undertaken.
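The "independent decision trees" idea above, separate, small classifiers for room, activity, and direction, each operating on its own feature subset, can be sketched as follows. The hand-written decision stumps, feature names, thresholds, and class labels are purely illustrative stand-ins for the paper's trained trees.

```python
# Illustrative sketch: independent simple classifiers (hand-written stumps
# standing in for trained decision trees) infer room and activity from a small
# feature vector. Features, thresholds, and labels are assumed, not the paper's.

def classify_room(rgb_rss):
    """Room from the dominant RGB channel (stand-in for the room tree)."""
    r, g, b = rgb_rss
    if r > g and r > b:
        return "kitchen"
    return "living room" if g > b else "hallway"

def classify_activity(accel_variance):
    """Activity from IMU acceleration variance (illustrative thresholds)."""
    if accel_variance < 0.05:
        return "standing"
    return "walking" if accel_variance < 0.5 else "walking stairs"

def classify(sample):
    """Independent trees: each decision uses only its own feature subset,
    so each tree can be trained (or here, defined) separately."""
    return {
        "room": classify_room(sample["rgb_rss"]),
        "activity": classify_activity(sample["accel_var"]),
    }

print(classify({"rgb_rss": (0.7, 0.2, 0.1), "accel_var": 0.3}))
# -> {'room': 'kitchen', 'activity': 'walking'}
```

Keeping the classifiers independent is what keeps the training effort low: each tree needs labeled data only for its own output variable, instead of jointly labeled combinations of room, activity, and direction.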