STRIDE: Training-Free Lane Detection from Fixed-Camera Surveillance
How do you detect lanes on roads that have no painted markings, from overhead surveillance cameras, without any training data? Our answer: let the vehicles draw the lanes themselves.
The Problem
Lane detection is a fundamental requirement for intelligent transportation systems (ITS). Yet existing solutions either demand expensive per-viewpoint deep learning models or depend on fragile trajectory-level analysis. Most lane detection research targets forward-facing ego-vehicle cameras and visible lane markings, but the real need is in oblique overhead surveillance cameras viewing roads that may lack any painted markings at all.
Our Approach: STRIDE
STRIDE (Spatio-Temporal Rule-based Identification and Direction Extraction) is a 13-stage rule-based pipeline that transforms raw vehicle detections into labeled lane regions with calibrated direction estimates — requiring zero training data. The key insight: in fixed-camera settings, vehicles themselves trace out lane boundaries through repeated traversal patterns.
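To make the flow concrete, here is a heavily condensed Python sketch of that idea. The function names and the thresholds are hypothetical stand-ins, not the actual 13 stages; minimal versions of each helper appear under "Key Technical Innovations" below, so the pieces can be run together.

```python
# Condensed sketch of the STRIDE flow, not the actual 13-stage pipeline.
# accumulate_directions, rayleigh_coherence, guided_filter, and fit_vmm
# are defined in the sketches in the next section; the 0.7 coherence
# threshold and k=2 direction count are illustrative assumptions.
import numpy as np

def stride_sketch(detections, frame_shape):
    """detections: (x, y, heading_rad, confidence) tuples over all frames."""
    # Confidence-weighted directional evidence per pixel.
    vec_sum, weight = accumulate_directions(detections, frame_shape)
    # Keep pixels where traffic moves consistently (Rayleigh coherence).
    coherent = rayleigh_coherence(vec_sum, weight) > 0.7
    # Edge-preserving smoothing of the traversal-density map.
    density = guided_filter(guide=weight, src=weight * coherent)
    # Cluster dominant travel directions into per-lane angular groups.
    mu, kappa, pi, labels = fit_vmm(np.angle(vec_sum[coherent]), k=2)
    return density, labels
```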
Key Technical Innovations
- Confidence-Weighted Direction Accumulation: Instead of naively averaging headings, each vehicle detection contributes directional evidence weighted by its detection confidence, building robust directional histograms over time (see the first sketch after this list).
- Directional Coherence Map: We use the Rayleigh statistic to quantify how consistently vehicles move through each spatial region, filtering out noise from parking and U-turning vehicles (covered in the same sketch).
- Edge-Preserving Guided-Filter Smoothing: Traditional Gaussian smoothing bleeds lane boundaries together. Our guided-filter approach preserves sharp density transitions between adjacent lanes (second sketch below).
- Von Mises Mixture Model (VMM): For direction-based lane separation, we employ VMM clustering, the circular-statistics analogue of a Gaussian mixture model, yielding statistically principled angular clustering (third sketch below).
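The first two steps fit naturally in one sketch. Below is a minimal numpy version, assuming detections arrive as (x, y, heading-in-radians, confidence) tuples; the paper's exact weighting scheme isn't reproduced here, and per-pixel indexing stands in for whatever spatial binning the pipeline actually uses. Encoding each heading as a point on the unit circle makes the wraparound at 0/2π a non-issue, and the mean resultant length (the quantity the Rayleigh test is built on) drops out for free.

```python
import numpy as np

def accumulate_directions(detections, shape):
    """Accumulate confidence-weighted unit heading vectors per pixel.

    detections: iterable of (x, y, heading_rad, conf) tuples, one per
    vehicle detection across all frames of the video.
    """
    vec_sum = np.zeros(shape, dtype=np.complex128)  # weighted unit-vector sums
    weight = np.zeros(shape, dtype=np.float64)      # total confidence mass
    for x, y, heading, conf in detections:
        vec_sum[y, x] += conf * np.exp(1j * heading)  # heading on the unit circle
        weight[y, x] += conf
    return vec_sum, weight

def rayleigh_coherence(vec_sum, weight, eps=1e-9):
    """Mean resultant length in [0, 1]: ~1 means aligned traffic,
    ~0 means directional noise such as parking maneuvers or U-turns."""
    return np.abs(vec_sum) / (weight + eps)

# Two high-confidence detections with nearly identical headings at one pixel:
dets = [(10, 20, 0.05, 0.9), (10, 20, -0.02, 0.8)]
v, w = accumulate_directions(dets, (64, 64))
print(rayleigh_coherence(v, w)[20, 10])  # ~1.0 -> directionally coherent
```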
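Second, a minimal single-channel guided filter built from box means, following He et al. (2010); the radius and eps values are illustrative, and OpenCV's ximgproc module ships a production implementation. Using the density map as its own guide smooths within a lane while keeping the sharp density drop between adjacent lanes intact.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Minimal single-channel guided filter (He et al., 2010).

    Fits a local linear model q = a * guide + b in each window, so
    smoothing follows the edges of `guide` instead of blurring across
    them. Smaller eps preserves edges more aggressively.
    """
    size = 2 * radius + 1
    mean = lambda a: uniform_filter(a, size=size, mode="reflect")
    mean_g, mean_s = mean(guide), mean(src)
    cov_gs = mean(guide * src) - mean_g * mean_s
    var_g = mean(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)          # per-pixel linear coefficients
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)    # aggregate overlapping windows

# Self-guided smoothing of a traversal-density map:
# smoothed = guided_filter(density, density)
```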
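Third, a self-contained EM sketch of von Mises mixture fitting. The paper's initialization and model-selection details aren't reproduced; the closed-form kappa update below is the common Banerjee et al. approximation, one of several in use.

```python
import numpy as np
from scipy.special import i0e  # exponentially scaled I0, avoids overflow

def fit_vmm(angles, k=2, iters=100, seed=0):
    """EM for a k-component von Mises mixture over angles (radians).

    Circular analogue of a GMM: component means are resultant-vector
    angles; kappa (concentration) is recovered from the mean resultant
    length via the Banerjee et al. approximation of A^{-1}.
    """
    rng = np.random.default_rng(seed)
    z = np.exp(1j * angles)
    mu = rng.choice(angles, size=k, replace=False)  # random initial means
    kappa = np.full(k, 1.0)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log density of each angle under each component.
        # log I0(kappa) = kappa + log(i0e(kappa)) stays finite for large kappa.
        log_norm = np.log(2 * np.pi) + kappa + np.log(i0e(kappa))
        log_p = ((np.log(pi) - log_norm)[None, :]
                 + kappa[None, :] * np.cos(angles[:, None] - mu[None, :]))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted resultant vector per component.
        w = resp.sum(axis=0)
        res = (resp * z[:, None]).sum(axis=0) / w
        mu = np.angle(res)
        rbar = np.clip(np.abs(res), 1e-6, 1 - 1e-6)
        kappa = np.clip(rbar * (2 - rbar**2) / (1 - rbar**2), 1e-2, 1e3)
        pi = w / w.sum()
    return mu, kappa, pi, resp.argmax(axis=1)

# Two opposing traffic streams, roughly 180 degrees apart:
ang = np.concatenate([np.random.vonmises(0.0, 8.0, 200),
                      np.random.vonmises(np.pi, 8.0, 200)])
mu, kappa, pi, labels = fit_vmm(ang, k=2)
```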
Results
We evaluated STRIDE on 47 video sequences from the UA-DETRAC dataset against manually created lane-level ground truth (the first published dataset of its kind). STRIDE achieves:
- Mean Region IoU: 0.599 — outperforming classical baselines
- Mean Direction Error: 6.00 degrees
- Deployment Consistency: Top-3 finish in 83% of scenes
- Statistically significant improvement (p = 7.2 × 10⁻⁴)
Why It Matters
STRIDE demonstrates that principled rule-based systems can deliver competitive performance for infrastructure-side perception tasks without any training data. This is particularly valuable for rapid deployment across thousands of cameras in smart city networks, where per-camera fine-tuning is impractical.
