Glass-to-Glass Latency

Glass-to-glass latency refers to the total delay between capturing eye position (via an eye-tracking camera) and displaying the corresponding 3D image on the screen. This metric is critical for motion-to-photon (M2P) synchronization, as excessive latency causes misalignment between viewer movement and the displayed viewpoint, leading to judder, motion sickness, and degraded 3D immersion.

Breakdown of Latency Sources

The end-to-end pipeline consists of multiple stages, each contributing to total latency:

  1. Eye-Tracking Camera Capture (1-5ms)
    • Time to acquire an IR/near-eye image of the viewer’s pupils.
    • Depends on: Sensor readout speed (global vs. rolling shutter), exposure time, and frame rate (e.g., 120Hz vs. 240Hz).
  2. Eye-Tracking Processing (5-20ms)
    • Pupil detection, gaze vector estimation, and filtering (e.g., Kalman filters for smooth pursuit).
    • Depends on: Algorithm complexity (classical vs. deep learning) and hardware (CPU/GPU/ASIC).
  3. 3D View Synthesis (5-50ms)
    • Generating the correct perspective image for the current eye position.
    • Depends on:
      • Depth-based rendering (DBR): Warping existing views (~5ms).
      • Neural rendering (e.g., NeRF): Higher quality but slower (~50ms).
  4. Weaving/Interlacing (1-10ms)
    • Combining left/right views for lenticular/barrier displays.
    • Depends on: Display controller (FPGA vs. software).
  5. Display Refresh (0-16.7ms @60Hz)
    • Frame buffer delay (worst case: full frame at 60Hz = 16.7ms).
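
The stage ranges above can be combined into a rough end-to-end budget. A minimal Python sketch (the stage names and ranges are the ones listed above; they are illustrative, not measurements of any specific system):

```python
# Hypothetical latency-budget model: each stage maps to a (min, max) delay
# in milliseconds, taken from the ranges listed above.
PIPELINE_STAGES_MS = {
    "camera_capture": (1, 5),
    "eye_tracking_processing": (5, 20),
    "view_synthesis": (5, 50),          # DBR at the low end, neural at the high end
    "weaving_interlacing": (1, 10),
    "display_refresh": (0, 16.7),       # worst case: one full frame at 60Hz
}

def total_latency_range(stages):
    """Sum per-stage (min, max) delays into an end-to-end range."""
    lo = sum(m for m, _ in stages.values())
    hi = sum(M for _, M in stages.values())
    return lo, hi

lo, hi = total_latency_range(PIPELINE_STAGES_MS)
print(f"glass-to-glass budget: {lo:.1f}-{hi:.1f} ms")
```

Summing the extremes shows why stage choice matters: the best case lands well under 20ms, while the worst case (neural rendering plus a missed vsync) exceeds 100ms.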

Total Latency & Acceptable Thresholds

  • <20ms → Imperceptible (ideal for VR/AR).
  • 20-50ms → Noticeable but tolerable (common in eye-tracked autostereoscopy).
  • >50ms → Causes visible lag and discomfort.
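
A small helper can map a measured value onto these bands (the thresholds are the ones listed above, not a formal standard):

```python
# Classify a measured glass-to-glass latency using the perceptual
# bands described on this page (illustrative thresholds, not a spec).
def rate_latency(ms):
    if ms < 20:
        return "imperceptible"
    if ms <= 50:
        return "noticeable but tolerable"
    return "visible lag / discomfort"

print(rate_latency(15))   # imperceptible
print(rate_latency(35))   # noticeable but tolerable
```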

Measurement Methods

1. Photodiode + High-Speed Camera (Gold Standard)

  • Setup:
    • IR LED sync pulse triggers eye-tracking camera.
    • Photodiode detects display update.
    • High-speed camera (1000+ FPS) records the time difference.
  • Pros: Direct, accurate.
  • Cons: Expensive, requires lab setup.

2. Embedded Timestamping (Software-Based)

  • Method:
    • Timestamp eye-tracking frames and display frames.
    • Cross-correlate logs to compute latency.
  • Pros: No extra hardware.
  • Cons: Less precise (OS scheduling delays).
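
The log-matching step can be sketched as follows, assuming the renderer tags each displayed frame with the ID of the eye-tracking frame it was rendered from, and that both logs share one monotonic clock (both are assumptions; the exact log format is system-specific):

```python
# Sketch of the software timestamping method. capture_log and display_log
# are dicts mapping frame_id -> timestamp (ms, one monotonic clock).
def glass_to_glass_latencies(capture_log, display_log):
    """Latency per frame: display time minus capture time, for
    frame IDs present in both logs (dropped frames are skipped)."""
    return [display_log[fid] - capture_log[fid]
            for fid in capture_log if fid in display_log]

capture = {101: 0.0, 102: 8.3, 103: 16.7}   # eye-camera frame exposure times
display = {101: 27.5, 103: 44.0}            # frame 102 was dropped
print(glass_to_glass_latencies(capture, display))
```

Averaging the per-frame deltas gives the mean latency; the spread between min and max reveals jitter, which OS scheduling makes this method prone to under-reporting.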

3. Pursuit Eye Motion Test (Perceptual Validation)

  • Method:
    • User tracks a moving target while the system records perceived lag.
  • Notes: Subjective, but useful for real-world validation.

Optimization Techniques

  • Eye-Tracking ASICs (e.g., Tobii) → Faster pupil detection.
  • FPGA-Based Rendering → Bypass CPU/GPU bottlenecks.
  • Predictive Gaze Estimation → Compensate for latency with motion prediction.
  • Low-Persistence Displays → Reduce motion blur during fast updates.
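
Predictive gaze estimation can be as simple as extrapolating gaze velocity forward by the known pipeline latency. A minimal sketch (a production system would typically use a Kalman or similar filter for smoothing; the function name and values here are illustrative):

```python
# Hypothetical linear gaze prediction: estimate velocity from two samples
# and extrapolate forward by the measured pipeline latency.
def predict_gaze(prev_xy, curr_xy, dt_ms, latency_ms):
    """Extrapolate gaze position from two samples taken dt_ms apart."""
    vx = (curr_xy[0] - prev_xy[0]) / dt_ms
    vy = (curr_xy[1] - prev_xy[1]) / dt_ms
    return (curr_xy[0] + vx * latency_ms,
            curr_xy[1] + vy * latency_ms)

# Gaze moving right at ~0.6 px/ms; compensate for a 30ms pipeline.
print(predict_gaze((100.0, 50.0), (105.0, 50.0), 8.33, 30.0))
```

Rendering for the predicted rather than the last-measured eye position effectively cancels latency during smooth pursuit, at the cost of overshoot during saccades.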

Industry Benchmarks

System                      Latency (ms)    Technology
Varjo XR-4                  ~15             Eye-tracked DBR
Looking Glass 8K            ~30             Light field (no tracking)
Custom Autostereo (FPGA)    ~25             Eye-tracked + lenticular

Conclusion

Reducing glass-to-glass latency is crucial for comfortable, dynamic autostereoscopic 3D. While <20ms is ideal, most systems today operate in the 20-50ms range. Advanced eye-tracking hardware, predictive algorithms, and dedicated rendering pipelines are key to minimizing lag. Future improvements in neural rendering acceleration and low-latency displays (e.g., microLED) may push this closer to imperceptible levels.
