Why My Autonomous Vehicle Project Is Different From Existing Solutions
- Raffay Hassan
- Mar 12
- 4 min read
Autonomous driving has become one of the most rapidly developing areas of robotics and artificial intelligence. Companies such as Waymo, Tesla, and Mobileye have spent years developing sophisticated perception systems capable of detecting objects, predicting movement, and making complex driving decisions.
At first glance, building an autonomous vehicle perception system might appear simply to replicate what these companies have already achieved. However, my final-year project takes a different perspective.
Rather than attempting to recreate a full autonomous driving platform, the goal of my project is to explore a sensor-driven digital twin architecture for predictive collision detection using embedded hardware.
The main question behind the project is simple:
How can a vehicle predict a potential collision before it happens?
Industrial Autonomous Driving Systems
Modern autonomous driving platforms rely on complex software stacks consisting of multiple components including perception, prediction, planning, and vehicle control [1]. These systems integrate multiple sensors such as cameras, LiDAR, and radar to create a detailed understanding of the surrounding environment.
Large-scale systems use this information to make driving decisions in real time. They rely heavily on multi-sensor perception pipelines because each sensor provides a different type of information about the environment [2].
Cameras provide semantic understanding and object classification
LiDAR provides precise distance measurements
Radar provides velocity information and performs well in poor visibility
These sensing modalities work together to improve environmental awareness.
However, most industrial systems aim to solve the entire autonomous driving problem, which includes navigation, path planning, and vehicle control.
My project focuses on a smaller but equally important aspect of autonomous safety:
predictive collision detection.
A Safety-Focused Architecture
Instead of implementing a full autonomous driving stack, the system focuses specifically on identifying potential collision risks.
The system integrates three complementary sensors:
Camera for visual object detection using YOLO
LiDAR for obstacle distance measurement
Radar for relative velocity estimation
By combining these sensors, the system calculates a safety metric known as Time-To-Collision (TTC).
TTC estimates the amount of time remaining before a collision would occur if both the vehicle and obstacle maintain their current speeds. It is widely used in advanced driver-assistance systems because it allows vehicles to anticipate hazards before they become critical [3].
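Under the constant-speed assumption described above, TTC reduces to range divided by closing speed. The sketch below illustrates this calculation; the function and parameter names are mine, not the project's actual API.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Estimate seconds until collision under a constant-velocity model.

    distance_m: obstacle range (e.g. from LiDAR), in metres
    closing_speed_mps: relative approach speed (e.g. from radar), in m/s;
        positive means the gap is shrinking.
    Returns infinity when the obstacle is not approaching.
    """
    if closing_speed_mps <= 0:
        return float("inf")  # not closing: no collision predicted
    return distance_m / closing_speed_mps

# Example: obstacle 20 m ahead, closing at 8 m/s -> 2.5 s to impact
print(time_to_collision(20.0, 8.0))  # 2.5
```

A real system would low-pass-filter the sensor readings and compare the TTC against a braking-time threshold, but the core metric is this simple ratio.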
The Digital Twin Perspective
One of the most interesting aspects of the project is how the digital twin concept is implemented.
A digital twin is commonly defined as a dynamic digital representation of a physical system that continuously updates using real-world data [4]. Digital twins have been widely used in manufacturing, infrastructure monitoring, and transportation systems.
In many robotics projects, simulation environments are mistakenly considered digital twins. However, in my project the digital twin is implemented differently.
Instead of being a simulator, the digital twin exists as an internal world model within the system architecture.
The simplified architecture looks like this:
Sensors (Camera + LiDAR + Radar)
↓
Perception Layer
↓
Digital Twin (World Model)
↓
TTC Risk Evaluation
↓
Collision Decision

This world model continuously updates based on sensor observations, creating a digital representation of the vehicle’s surroundings.
Unlike a simulator, which generates artificial environments, the digital twin mirrors the real environment using live sensor data.
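As a rough illustration of the world-model idea, the sketch below keeps a per-obstacle state store that is overwritten by live sensor observations and pruned when readings go stale. The class and field names are assumptions for illustration, not the project's actual implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TrackedObstacle:
    label: str                 # object class from the camera (YOLO) detector
    distance_m: float          # range from LiDAR
    closing_speed_mps: float   # relative velocity from radar
    last_seen: float = field(default_factory=time.time)

class WorldModel:
    """Digital twin of the vehicle's surroundings, refreshed from sensors."""

    def __init__(self, stale_after_s: float = 0.5):
        self.obstacles: dict[int, TrackedObstacle] = {}
        self.stale_after_s = stale_after_s

    def update(self, track_id: int, label: str,
               distance_m: float, closing_speed_mps: float) -> None:
        # Each new observation overwrites the twin's view of that obstacle.
        self.obstacles[track_id] = TrackedObstacle(
            label, distance_m, closing_speed_mps)

    def prune_stale(self) -> None:
        # Drop obstacles that no sensor has reported recently.
        now = time.time()
        self.obstacles = {
            tid: ob for tid, ob in self.obstacles.items()
            if now - ob.last_seen <= self.stale_after_s
        }

wm = WorldModel()
wm.update(1, "car", 20.0, 8.0)
print(wm.obstacles[1].label)  # car
```

The key point is that the model holds state about the *real* environment, updated continuously, rather than generating a synthetic one.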
Distributed Edge Computing
Another interesting feature of the system is its distributed computing architecture.
Instead of running every process on a single computer, the system is distributed across two embedded devices.
| Device | Function |
| --- | --- |
| Jetson Orin Nano | Computer vision processing, LiDAR interpretation, digital twin reasoning |
| Raspberry Pi 5 | Radar processing and communication |
Distributed processing is increasingly common in robotics and autonomous systems because it allows different hardware platforms to handle specialised tasks more efficiently [2].
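To make the split concrete, the sketch below shows one plausible way the Raspberry Pi could push radar readings to the Jetson, assuming a simple JSON-over-UDP message format. The address, port, and message schema are placeholders; the project's actual transport may differ.

```python
import json
import socket

JETSON_ADDR = ("192.168.0.10", 5005)  # placeholder address and port

def send_radar_reading(sock: socket.socket,
                       closing_speed_mps: float) -> None:
    """Runs on the Raspberry Pi: push one radar measurement to the Jetson."""
    msg = json.dumps({"sensor": "radar",
                      "closing_speed_mps": closing_speed_mps}).encode()
    sock.sendto(msg, JETSON_ADDR)

def decode_radar_reading(payload: bytes) -> float:
    """Runs on the Jetson: parse one incoming radar message."""
    data = json.loads(payload)
    assert data["sensor"] == "radar"
    return data["closing_speed_mps"]

# Round-trip check of the encoding (no network required):
payload = json.dumps({"sensor": "radar", "closing_speed_mps": 8.0}).encode()
print(decode_radar_reading(payload))  # 8.0
```

Keeping the message format small and typed like this lets each device do its specialised work while sharing only the values the digital twin actually needs.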
Comparison with Existing Systems
To better understand how this project differs from existing approaches, it is useful to compare the architecture with well-known autonomous driving systems.
| System | Sensors Used | Computing Architecture | Primary Objective | Key Differences from This Project |
| --- | --- | --- | --- | --- |
| Waymo Driver | Camera, LiDAR, Radar | High-performance autonomous vehicle computing systems | Full autonomous driving | Requires powerful hardware and large datasets |
| Tesla Vision | Cameras only | End-to-end neural network perception | Vision-based autonomy | No LiDAR or radar sensing |
| Mobileye ADAS | Camera and radar | Automotive embedded platforms | Driver assistance systems | Designed for commercial vehicle safety features |
| Simulation-only research | Simulated sensors | High-performance computing environments | Algorithm development | Often not integrated with real hardware |
| This Project | Camera, LiDAR, Radar | Distributed embedded edge computing (Jetson + Raspberry Pi) | Predictive collision detection using TTC and a digital twin | Low-cost prototype focused on safety reasoning |
Why This Matters
The aim of this project is not to compete with large industrial autonomous driving systems. Instead, it demonstrates how key ideas such as sensor fusion, digital twins, and predictive safety metrics can be implemented using relatively simple hardware.
By focusing specifically on predictive collision detection, the system explores how autonomous safety systems can reason about potential hazards rather than simply reacting to obstacles.
This makes the project less about full autonomy and more about understanding how intelligent safety mechanisms can be designed and tested using embedded robotics platforms.
References
[1] S. Grigorescu, B. Trasnea, T. Cocias and G. Macesanu, “A Survey of Deep Learning Techniques for Autonomous Driving,” Journal of Field Robotics, vol. 37, no. 3, pp. 362–386, 2020.
[2] C. Badue et al., “Self-Driving Cars: A Survey,” Expert Systems with Applications, vol. 165, 2021.
[3] S. Lefèvre, D. Vasquez and C. Laugier, “A Survey on Motion Prediction and Risk Assessment for Intelligent Vehicles,” Robotics and Autonomous Systems, vol. 62, no. 9, pp. 1275–1302, 2014.
[4] F. Tao, Q. Qi, A. Liu and A. Kusiak, “Digital Twins and Cyber–Physical Systems toward Smart Manufacturing and Industry 4.0,” Engineering, vol. 5, no. 4, pp. 653–661, 2019.