Subtopic Deep Dive
Sensor Fusion for Autonomous Driving
Research Guide
What is Sensor Fusion for Autonomous Driving?
Sensor fusion for autonomous driving combines data from LiDAR, radar, and cameras using probabilistic methods like Kalman filters and deep learning for robust environmental perception.
This subtopic addresses fusing multi-modal sensor inputs to handle sensor failures and adverse weather. Key surveys include Yeong et al. (2021), reviewing sensor fusion technologies (719 citations), and Feng et al. (2020) on deep multi-modal detection (1,250 citations). More than 20 papers covered in this guide discuss fusion methods and challenges in autonomous vehicles.
Why It Matters
Sensor fusion enables the reliable object detection and scene understanding critical for safety in autonomous driving, as shown in Yeong et al. (2021), which reviews fusion for obstacle detection in varying conditions. Feng et al. (2020) demonstrate that multi-modal deep learning improves semantic segmentation accuracy by 15-20% over single sensors. Yurtsever et al. (2020, 1,602 citations) highlight fusion's role in reducing ADS fatalities through robust perception.
Key Research Challenges
Adverse Weather Degradation
Sensors degrade in rain, fog, or snow, reducing fusion reliability. Yeong et al. (2021) note LiDAR scattering and camera blur as primary issues. Robust fusion requires weather-adaptive models.
Sensor Failure Handling
Individual sensor outages demand graceful degradation in fusion pipelines. Li et al. (2013) describe real-time drivable region detection under failures using sensor fusion. Probabilistic methods like Bayesian fusion address uncertainty.
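One simple probabilistic scheme for graceful degradation is inverse-variance weighting, a special case of Bayesian fusion under independent Gaussian sensor noise. The sketch below is illustrative, not a production pipeline: failed sensors are passed as None and simply dropped, and the sensor names and noise figures are assumptions.

```python
import numpy as np

def fuse_measurements(estimates, variances):
    """Inverse-variance weighted fusion of the sensors still online.

    Failed sensors are passed as None and excluded, so the estimate
    degrades gracefully instead of the pipeline crashing.
    """
    pairs = [(z, v) for z, v in zip(estimates, variances) if z is not None]
    if not pairs:
        raise ValueError("all sensors failed")
    weights = np.array([1.0 / v for _, v in pairs])
    values = np.array([z for z, _ in pairs])
    fused = float(np.sum(weights * values) / np.sum(weights))
    fused_var = float(1.0 / np.sum(weights))
    return fused, fused_var

# Three range estimates (LiDAR, radar, camera, in meters);
# the radar (index 1) has dropped out.
fused, var = fuse_measurements([10.2, None, 9.8], [0.25, 0.10, 0.50])
```

Note that the fused variance is smaller than any surviving sensor's variance, which is the payoff of fusing: each remaining modality still tightens the estimate even when one stream is lost.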
Multi-Modal Alignment
Synchronizing LiDAR, radar, and camera data streams poses calibration and time-alignment challenges. Feng et al. (2020) survey deep learning methods for cross-modal feature alignment. Real-time latency budgets further constrain how much alignment accuracy a fusion pipeline can afford.
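A common first step in multi-modal alignment is pairing each reference frame with the nearest-in-time frame from another stream. This is a minimal sketch of nearest-timestamp matching, assuming sorted timestamps and illustrative 10 Hz LiDAR / ~30 Hz camera rates; real systems typically add hardware triggering and motion compensation on top.

```python
import bisect

def align_nearest(ref_stamps, other_stamps, max_gap=0.05):
    """Pair each reference timestamp (e.g. a LiDAR sweep) with the
    nearest timestamp in another stream (e.g. camera frames),
    rejecting pairs further apart than max_gap seconds.
    Both input lists must be sorted ascending."""
    pairs = []
    for t in ref_stamps:
        i = bisect.bisect_left(other_stamps, t)
        # Candidates: the neighbor just below and just above t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_stamps)]
        j = min(candidates, key=lambda k: abs(other_stamps[k] - t))
        if abs(other_stamps[j] - t) <= max_gap:
            pairs.append((t, other_stamps[j]))
    return pairs

lidar = [0.00, 0.10, 0.20]                               # 10 Hz sweeps
camera = [0.01, 0.045, 0.08, 0.115, 0.15, 0.185, 0.22]   # ~30 Hz frames
matches = align_nearest(lidar, camera)
```

The `max_gap` threshold is where the latency/accuracy trade-off shows up: a tighter gap drops more frames but keeps the cross-modal features better aligned.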
Essential Papers
A Survey of Autonomous Driving: Common Practices and Emerging Technologies
Ekim Yurtsever, Jacob Lambert, Alexander Carballo et al. · 2020 · IEEE Access · 1.6K citations
Automated driving systems (ADSs) promise a safe, comfortable and efficient driving experience. However, fatalities involving vehicles equipped with ADSs are on the rise. The full potential of ADS...
A Multiagent Approach to Autonomous Intersection Management
Kurt Dresner, Peter Stone · 2008 · Journal of Artificial Intelligence Research · 1.3K citations
Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot,...
Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges
Di Feng, Christian Haase-Schütz, Lars Rosenbaum et al. · 2020 · IEEE Transactions on Intelligent Transportation Systems · 1.3K citations
Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with diff...
A survey on motion prediction and risk assessment for intelligent vehicles
Stéphanie Lefèvre, Dizan Vasquez, Christian Laugier · 2014 · ROBOMECH Journal · 1.1K citations
Junior: The Stanford entry in the Urban Challenge
Michael Montemerlo, Jan Becker, Suhrid Bhat et al. · 2008 · Journal of Field Robotics · 993 citations
Abstract This article presents the architecture of Junior, a robotic vehicle capable of navigating urban environments autonomously. In doing so, the vehicle is able to select its own routes, percei...
Planning and Decision-Making for Autonomous Vehicles
Wilko Schwarting, Javier Alonso-Mora, Daniela Rus · 2018 · Annual Review of Control Robotics and Autonomous Systems · 879 citations
In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the field of perception, planning,...
Deep Reinforcement Learning framework for Autonomous Driving
Ahmad EL Sallab, Mohammed Abdou, Etienne Perot et al. · 2017 · Electronic Imaging · 809 citations
Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived ...
Reading Guide
Foundational Papers
Start with Li et al. (2013) for sensor-fusion drivable region detection in challenging scenarios; Montemerlo et al. (2008, Junior) for early urban fusion architecture; Leonard et al. (2008) for perception-driven fusion without maps.
Recent Advances
Study Yeong et al. (2021) for comprehensive sensor fusion review; Feng et al. (2020) for deep multi-modal methods; Yurtsever et al. (2020) for emerging ADS practices including fusion.
Core Methods
Core techniques: Kalman/Extended Kalman filters for probabilistic fusion (Yeong 2021); deep CNNs for multi-modal feature fusion (Feng 2020); Bayesian occupancy grids for uncertainty handling (Li 2013).
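To make the Kalman-filter idea concrete, here is a minimal one-dimensional measurement update that sequentially folds a radar return and a camera range estimate into one state. This is a didactic sketch, not the full EKF pipelines the surveys describe; the range values and noise variances are invented for illustration.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fold measurement z
    (with noise variance R) into the state estimate x
    (with uncertainty P)."""
    K = P / (P + R)       # Kalman gain: how much to trust z vs. x
    x = x + K * (z - x)   # corrected estimate
    P = (1.0 - K) * P     # uncertainty shrinks after the update
    return x, P

# Track the range to a lead vehicle: start from a vague prior,
# then sequentially fuse a radar return and a camera estimate.
x, P = 0.0, 1000.0                       # weak prior
x, P = kalman_update(x, P, 25.3, 0.5)    # radar: accurate range
x, P = kalman_update(x, P, 24.8, 2.0)    # camera: noisier range
```

After both updates the estimate sits between the two measurements, weighted toward the lower-variance radar, and the posterior variance is below either sensor's alone; the same gain/update structure generalizes to the vector-valued EKF case.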
How PapersFlow Helps You Research Sensor Fusion for Autonomous Driving
Discover & Search
Research Agent uses searchPapers('sensor fusion Kalman filter autonomous driving') to find Yeong et al. (2021), then citationGraph reveals 719 citing papers on fusion tech. findSimilarPapers on Feng et al. (2020) uncovers 15+ multi-modal detection works. exaSearch queries 'LiDAR radar camera fusion weather robust' for 50+ targeted results.
Analyze & Verify
Analysis Agent applies readPaperContent on Yeong et al. (2021) to extract Kalman filter implementations, then verifyResponse with CoVe checks fusion accuracy claims against datasets. runPythonAnalysis simulates sensor fusion with NumPy/pandas on KITTI data for error metrics. GRADE grading scores Feng et al. (2020) methods at A for multi-modal benchmarks.
Synthesize & Write
Synthesis Agent detects gaps in weather-robust fusion from Yeong et al. (2021) and Feng et al. (2020), flags contradictions in failure handling. Writing Agent uses latexEditText for fusion algorithm sections, latexSyncCitations integrates 20 papers, latexCompile generates PDF. exportMermaid creates sensor fusion pipeline diagrams.
Use Cases
"Compare Kalman vs deep learning fusion performance on nuScenes dataset"
Research Agent → searchPapers → Analysis Agent → runPythonAnalysis (pandas/NumPy reproduction of fusion metrics from Yeong 2021) → GRADE verification → researcher gets a CSV of error rates and plots.
"Write LaTeX review on multi-modal sensor fusion challenges"
Research Agent → citationGraph (Feng 2020) → Synthesis → gap detection → Writing Agent → latexEditText + latexSyncCitations (10 papers) + latexCompile → researcher gets compiled PDF review.
"Find GitHub code for LiDAR-camera fusion implementations"
Research Agent → paperExtractUrls (Li 2013) → paperFindGithubRepo → githubRepoInspect → Code Discovery workflow → researcher gets 5 repos with fusion code, README summaries, install commands.
Automated Workflows
Deep Research workflow scans 50+ papers via searchPapers on 'sensor fusion autonomous', structures report with Yeong (2021) as anchor, outputs hierarchical summary with citation counts. DeepScan applies 7-step analysis: readPaperContent → CoVe verify → runPythonAnalysis on fusion sims from Feng (2020). Theorizer generates hypotheses on Bayesian fusion improvements from Lefèvre (2014) motion prediction gaps.
Frequently Asked Questions
What is sensor fusion in autonomous driving?
Sensor fusion integrates LiDAR, radar, and camera data using Kalman filters or deep learning for accurate perception, as defined in Yeong et al. (2021).
What are common fusion methods?
Methods include probabilistic Kalman filters, Bayesian networks, and deep multi-modal networks per Feng et al. (2020) and Li et al. (2013).
What are key papers on sensor fusion?
Yeong et al. (2021, 719 citations) review fusion technologies; Feng et al. (2020, 1,250 citations) cover deep multi-modal detection; Li et al. (2013, 369 citations) present drivable-region fusion.
What are open problems in sensor fusion?
Challenges include adverse weather robustness, real-time multi-modal alignment, and sensor failure recovery, noted in Yeong et al. (2021) and Feng et al. (2020).
Research Autonomous Vehicle Technology and Safety with AI
PapersFlow provides specialized AI tools for Engineering researchers. Here are the most relevant for this topic:
AI Literature Review
Automate paper discovery and synthesis across 474M+ papers
Paper Summarizer
Get structured summaries of any paper in seconds
Code & Data Discovery
Find datasets, code repositories, and computational tools
AI Academic Writing
Write research papers with AI assistance and LaTeX support
See how researchers in Engineering use PapersFlow
Field-specific workflows, example queries, and use cases.
Start Researching Sensor Fusion for Autonomous Driving with AI
Search 474M+ papers, run AI-powered literature reviews, and write with integrated citations — all in one workspace.
See how PapersFlow works for Engineering researchers