
Overview

Background and Motivation

Humans have long observed environmental changes by exploiting characteristic variations of visible light propagating through space. For example, when visible light passes through air containing microscopic water droplets, dispersion occurs due to refraction; based on this phenomenon, ancient people developed weather-prediction methods using atmospheric optical phenomena such as rainbows and halos around the sun or moon.

Advances in modern technology have made visible-light signals ubiquitous in human production and daily life environments. Ceiling lights continuously operating day and night in modern buildings, traffic lights on roads, and vehicle headlights all provide convenient illumination sources for visible-light-based sensing. Meanwhile, photodiodes, CMOS (Complementary Metal-Oxide Semiconductor) sensors, and CCD (Charge-Coupled Device) sensors have enabled a qualitative leap in humanity’s ability to receive and process visible-light signals.

This chapter introduces how visible-light signals can be used to sense devices and environments, complementing the coverage of radio-frequency (RF) signal-based sensing and fully leveraging the widespread presence of visible-light devices in IoT application scenarios. This approach expands the scope of wireless sensing applications and provides richer sensing services. Visible-light-based sensing directly utilizes readily available visible-light signals and hardware, eliminating the need to deploy specialized equipment or make complex modifications to existing infrastructure.

State of the Art

Visible-light signals are generated by light-emitting devices such as LEDs and fluorescent lamps, propagate through a medium, and are received by photodiodes, cameras, and other optical receivers. A large body of research exploits features of visible-light signals captured by photodiodes or cameras—combined with knowledge of light-source emission characteristics or spatial positions—to achieve localization of receiving devices and identification of light-source identities.

Device Localization

The rectilinear propagation property of visible light naturally endows visible-light signals with directional information. Numerous studies have explored visible-light-based device localization. Based on their underlying principles, these approaches fall into two categories: (1) localization leveraging the spatial distribution characteristics of visible light, and (2) localization leveraging camera imaging geometry.

(1) Localization Based on Spatial Distribution Characteristics of Visible Light

In ambient environments, visible-light signals exhibit varying intensities, colors, and other features across different spatial locations. However, such features lack strong, direct correlations with position, making them unsuitable for direct localization. To establish a mapping between visible-light signal features and spatial positions, researchers have designed specialized light sources that project visible-light signals with deliberately engineered spatial distributions of intensity, color, polarization, or flickering frequency. By comparing the measured signal features at the receiver against the pre-characterized spatial feature distribution, the receiver’s location can be estimated. Several works modify light sources to produce such spatially encoded patterns, enabling localization via analysis of the received signal features.
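
As a concrete illustration of the matching step, the sketch below compares a measured feature vector against a pre-characterized map of positions to features and returns the closest entry. It is a minimal nearest-neighbor sketch under assumed two-dimensional features (an intensity value and a color ratio); the map values, the feature choice, and the locate function are hypothetical and do not come from any particular system.

```python
import numpy as np

# Hypothetical pre-characterized map: grid positions (x, y) in meters -> feature
# vector measured at that position (e.g., received intensity and a color ratio).
# In a real system this map reflects the engineered spatial light distribution.
feature_map = {
    (0.0, 0.0): np.array([0.82, 0.31]),
    (0.0, 1.0): np.array([0.74, 0.45]),
    (1.0, 0.0): np.array([0.61, 0.28]),
    (1.0, 1.0): np.array([0.55, 0.40]),
}

def locate(measured: np.ndarray) -> tuple:
    """Return the mapped position whose stored feature vector is closest
    (in Euclidean distance) to the features measured at the receiver."""
    return min(feature_map, key=lambda pos: np.linalg.norm(feature_map[pos] - measured))

print(locate(np.array([0.60, 0.30])))  # -> (1.0, 0.0)
```

Real systems differ in the features they engineer and in how densely the map is characterized, but the lookup against a known spatial feature distribution follows this pattern.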

(2) Localization Based on Camera Imaging Geometry

Due to rectilinear light propagation, in standard camera imaging models, each object point, the camera’s optical center, and its corresponding image point lie on the same straight line. When a camera with known intrinsic parameters—such as focal length and pixel size—captures images of multiple object points at known 3D positions, geometric constraints among these object–image point correspondences allow estimation of the camera’s 3D pose (position and orientation). Some studies localize a camera by imaging multiple light sources at known spatial positions. Others design visual markers for Simultaneous Localization and Mapping (SLAM), enabling estimation of the relative pose between the camera and marker. Given the camera’s known pose, the marker’s 3D position can then be inferred. However, current geometry-based methods rely heavily on accurate camera intrinsic parameters, requiring prior camera calibration to determine focal length, pixel size, and other parameters.
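
The constraint that each object point, the optical center, and the corresponding image point are collinear is the classical perspective-n-point (PnP) problem. The sketch below recovers a camera pose from four light sources at known 3D positions using OpenCV's solvePnP; the intrinsic parameters, 3D coordinates, and pixel coordinates are illustrative values assumed for the example, not measurements from any cited system.

```python
import numpy as np
import cv2  # OpenCV; solvePnP solves the perspective-n-point problem described above

# Hypothetical intrinsics: focal length and principal point in pixels.
# In practice these come from camera calibration, which is exactly the
# dependency criticized later in this chapter.
fx = fy = 1200.0
cx, cy = 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]], dtype=np.float64)

# Known 3D positions of four light sources (object points, meters) and the
# pixel coordinates at which they appear in the image (image points).
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)
image_points = np.array([[400.0, 300.0],
                         [900.0, 310.0],
                         [880.0, 760.0],
                         [410.0, 750.0]], dtype=np.float64)

# Each object point, the optical center, and its image point are collinear;
# solvePnP finds the rotation and translation consistent with these constraints.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)               # rotation matrix of the camera pose
camera_position = (-R.T @ tvec).ravel()  # camera center expressed in the world frame
print(ok, camera_position)
```

Note that the camera matrix K must be known beforehand, which is why geometry-based methods depend on prior calibration of the focal length and pixel size.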

Light-Source Identity Recognition

By embedding identity-related features into light sources, their identities can be recognized through detection and classification of those features. One study assigns distinct flickering frequencies to different light sources; a camera's rolling shutter captures these frequencies as stripes of varying widths, so the flickering frequency, and thus the identity, can be estimated from the stripe width. Another work encodes identities using distinct polarization directions and leverages optical rotatory dispersion to detect the polarization orientation for identity recognition.
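
To make the stripe-width relationship concrete, the following sketch converts a measured stripe width into a flicker-frequency estimate. It assumes a 50% flicker duty cycle and an exposure time much shorter than the flicker period, so that one bright or dark band corresponds to half a flicker period; the row readout time and stripe width below are hypothetical example values.

```python
def flicker_frequency(stripe_width_rows: float, row_readout_time_s: float) -> float:
    """Estimate a light source's flicker frequency from the width of the
    bright/dark bands it produces in a rolling-shutter image.

    Consecutive image rows are exposed row_readout_time_s apart, so a band that
    is stripe_width_rows rows wide spans stripe_width_rows * row_readout_time_s
    seconds, which corresponds to half a flicker period (one on or off phase,
    assuming a 50% duty cycle).
    """
    half_period_s = stripe_width_rows * row_readout_time_s
    return 1.0 / (2.0 * half_period_s)

# Example: 25-row-wide stripes with a 20-microsecond row readout time -> 1 kHz
print(flicker_frequency(25, 20e-6))  # 1000.0
```

Classifying the estimated frequency against the set of frequencies assigned to the deployed light sources then yields the source identity.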

Moreover, inherent hardware imperfections cause light sources to emit signals carrying unique, source-specific intrinsic features. Researchers have demonstrated light-source identification by pre-characterizing these intrinsic features and subsequently matching them during operation.

Summary

Visible-light-based sensing enables light-source identity recognition via emission characteristics, and—using SLAM markers or image-recognition techniques—also supports identity recognition of generic objects or humans. However, existing device-localization methods face practical limitations: approaches relying on spatial distribution characteristics require customized modifications to light-emitting hardware, complicating system deployment; geometry-based methods demand simultaneous imaging of multiple object points at known 3D positions and depend critically on precise measurement of camera intrinsic parameters. To achieve high localization accuracy, such methods typically place known reference points far apart, making it cumbersome for users to ensure all required points appear within the camera’s field of view. Similarly, existing methods for localizing generic objects or humans rely on camera-based SLAM marker localization and thus also require precise intrinsic parameters. Nominal camera specifications often deviate significantly from actual values; since object distance is typically hundreds of times larger than image distance, even minor errors in intrinsic parameters induce substantial localization errors. Furthermore, intrinsic parameters change whenever the camera’s focal length is adjusted. Frequent recalibration across diverse application scenarios incurs prohibitive labor costs.
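
As a rough numerical illustration of this amplification (a simplified thin-lens sketch, not a model of any specific system), an error on the image plane, whether it stems from a mis-estimated intrinsic parameter or from pixel noise, is scaled by the ratio of object distance to image distance when mapped back into the scene:

```python
def object_space_error(image_error_mm: float, object_distance_mm: float,
                       image_distance_mm: float) -> float:
    """Propagate an image-plane error into object space.

    By similar triangles in the pinhole/thin-lens model, the lateral
    magnification is image_distance / object_distance, so an error on the
    sensor grows by the inverse of that ratio in the scene.
    """
    magnification = image_distance_mm / object_distance_mm
    return image_error_mm / magnification

# Hypothetical numbers: an object 3 m away imaged at roughly 5 mm image distance
# (magnification ~ 1/600); a 0.05 mm error on the sensor, a few pixels or the
# effect of a slightly wrong focal length, becomes 30 mm in the scene.
print(object_space_error(0.05, 3000.0, 5.0))  # 30.0
```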

This chapter addresses the need for visible-light-based device identity and position sensing by introducing two key topics: (1) active device localization using visible-light signals, and (2) passive tag localization and pose estimation. Through these topics, readers will gain a foundational understanding of concrete implementation methods for visible-light sensing.

References

  1. Lingkun Li, Pengjin Xie, Jiliang Wang. "RainbowLight: Towards Low Cost Ambient Light Positioning with Mobile Phones", ACM MobiCom 2018.
  2. Pengjin Xie, Lingkun Li, Jiliang Wang, Yunhao Liu. "LiTag: Localization and Posture Estimation with Passive Visible Light Tags", ACM SenSys 2020.