AUTONOMOUS VEHICLE NAVIGATION ENHANCED WITH SSLA-BASED LANE AND SIGN DETECTION
Abstract
The navigation of autonomous vehicles depends significantly on accurate environmental perception, especially the precise identification of traffic signs and lane markings. Conventional computer vision methods often underperform in dynamic driving scenarios because of fluctuating illumination, obstructions, and unpredictable road conditions. This research introduces an innovative method that uses a Smart Semantic Layered Architecture (SSLA) to improve real-time lane and traffic sign detection for autonomous vehicles. SSLA combines spatial, contextual, and temporal semantic layers with deep learning detection models to extract and fuse pertinent visual information. By utilising this multi-layered structure, the system enhances perceptual robustness, thereby improving navigation safety and decision-making precision. Experimental findings on benchmark datasets indicate improved detection accuracy, processing speed, and environmental adaptability, highlighting the promise of SSLA as a core perception module in autonomous driving systems.
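To make the layered-fusion idea in the abstract concrete, the following is a minimal sketch of how spatial, contextual, and temporal cues might be combined with raw detector confidences. The layer names follow the abstract, but the fusion rule, weights, class names, and data structures are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a layered-fusion perception step (not the SSLA code).
from dataclasses import dataclass
from collections import deque
import numpy as np


@dataclass
class Detection:
    label: str          # e.g. "lane_left", "stop_sign"
    confidence: float   # raw detector score in [0, 1]
    bbox: tuple         # (x1, y1, x2, y2) in image coordinates


class LayeredFusion:
    """Combine per-frame detector output with simple spatial, contextual,
    and temporal cues into a single fused confidence per detection."""

    def __init__(self, history: int = 5, weights=(0.5, 0.2, 0.3)):
        # Assumed weighting of the three semantic layers (spatial, contextual, temporal).
        self.w_spatial, self.w_context, self.w_temporal = weights
        self.history = {}           # label -> deque of recent confidences
        self.max_history = history

    def _temporal(self, det: Detection) -> float:
        # Temporal layer: smooth the detector score over recent frames.
        buf = self.history.setdefault(det.label, deque(maxlen=self.max_history))
        buf.append(det.confidence)
        return float(np.mean(buf))

    def fuse(self, det: Detection, spatial_prior: float, context_prior: float) -> float:
        # spatial_prior: e.g. agreement with expected lane geometry
        # context_prior: e.g. consistency with map or scene context
        temporal = self._temporal(det)
        return (self.w_spatial * spatial_prior
                + self.w_context * context_prior
                + self.w_temporal * temporal)


if __name__ == "__main__":
    fusion = LayeredFusion()
    det = Detection("stop_sign", confidence=0.82, bbox=(120, 40, 180, 100))
    fused = fusion.fuse(det, spatial_prior=0.9, context_prior=0.7)
    print(f"fused confidence for {det.label}: {fused:.2f}")
```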