A Data-Driven Sensor Fusion Model for Advanced Driver Assistance Systems

Authors

  • M. Swathi
  • Banoth Venkatesh
  • Bathula Hepsiba
  • Gouda Sai Charan
  • Ande Pranav Kumar

DOI:

https://doi.org/10.64751/ijdim.2026.v5.n2(1).698

Keywords:

Adaptive Cruise Control, Multi-Sensor Integration, Vision Sensors, Traffic Density Estimation, Intelligent Transportation Systems

Abstract

The rapid advancement of intelligent transportation systems and autonomous driving technologies has increased the importance of reliable perception mechanisms in Advanced Driver Assistance Systems (ADAS). Traditionally, vehicle perception relied on manual driving and basic rule-based systems with limited sensor integration, which often resulted in an incomplete understanding of the environment. These conventional approaches could not effectively handle complex and dynamic driving conditions due to a lack of intelligent data processing and adaptability. Over time, the evolution of machine learning enabled improved analysis of sensor data, yet challenges such as data imbalance, feature complexity, and limited accuracy persisted. The primary problem addressed in this research is the accurate classification of driving actions using heterogeneous sensor data under real-world conditions. Traditional systems suffer from poor generalization, high error rates, and an inability to process large-scale data efficiently. There is a strong need for an advanced framework that can integrate multiple data sources, extract meaningful features, and provide reliable predictions. To address these challenges, this study presents a structured framework that combines machine learning models such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Random Forest (RF) with a hybrid approach, DualStream-ConvRF (DCRF). The system incorporates data preprocessing, feature extraction, and model evaluation to enhance prediction performance. The proposed DCRF combines a Convolutional Neural Network (CNN) with the RF model, using the CNN for feature extraction and RF for classification, and achieved an accuracy of 95.35%, outperforming all baseline models. The significance of this research lies in its ability to enhance perception accuracy, improve road safety, and contribute to the development of intelligent and autonomous driving systems.
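The CNN-for-features, RF-for-classification pattern described above can be illustrated with a minimal sketch. This is not the authors' DCRF implementation: the sensor data, network shape, and filter counts here are hypothetical, and the convolutional stage is an untrained random-filter extractor (ReLU plus global average pooling) standing in for a trained CNN, feeding scikit-learn's `RandomForestClassifier`.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical sensor grids: two classes that differ in signal magnitude.
X = np.concatenate([rng.normal(size=(100, 16, 16)),
                    2.0 * rng.normal(size=(100, 16, 16))])
y = np.array([0] * 100 + [1] * 100)
perm = rng.permutation(200)
X, y = X[perm], y[perm]

def conv_features(batch, n_filters=4, k=3, seed=0):
    """Stand-in CNN stage: random k x k filters, valid convolution,
    ReLU, then global average pooling -> one feature per filter."""
    r = np.random.default_rng(seed)
    filters = r.normal(size=(n_filters, k, k))
    n, h, w = batch.shape
    feats = np.zeros((n, n_filters))
    for f in range(n_filters):
        out = np.zeros((n, h - k + 1, w - k + 1))
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[:, i, j] = (batch[:, i:i + k, j:j + k] * filters[f]).sum(axis=(1, 2))
        feats[:, f] = np.maximum(out, 0).mean(axis=(1, 2))  # ReLU + global avg pool
    return feats

# Stage 1: extract features; Stage 2: classify with Random Forest.
F = conv_features(X)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(F[:150], y[:150])
acc = clf.score(F[150:], y[150:])
print(f"hold-out accuracy: {acc:.2f}")
```

The design point the sketch captures is the division of labor: the convolutional stage compresses each raw sensor grid into a small feature vector, and the ensemble classifier handles the decision boundary, which is the pairing the abstract credits for DCRF's reported 95.35% accuracy.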

Published

2026-04-10

How to Cite

M. Swathi, Banoth Venkatesh, Bathula Hepsiba, Gouda Sai Charan, & Ande Pranav Kumar. (2026). A Data-Driven Sensor Fusion Model for Advanced Driver Assistance Systems. International Journal of Data Science and IoT Management System, 5(2(1)), 230-242. https://doi.org/10.64751/ijdim.2026.v5.n2(1).698
