All Posts


YOLOv5n vs YOLOv8n: Generational Performance Evaluation in CARLA Simulation
Phase: 1 (Simulation Extended Model Evaluation) Focus: Architecture comparison + adverse weather robustness + real-time stability Overview Following the initial YOLO implementation documented in the collision avoidance scenarios, this analysis compares two generational models: YOLOv5n (2020) and YOLOv8n (2023). The objective is to evaluate whether the newer YOLOv8 architecture provides meaningful improvements over the mature YOLOv5 for real-time collision detection. Both m…
Raffay Hassan
6 days ago · 8 min read


First Real Obstacle Tests: What the Data Shows
With the reactive controller in place and the thresholds corrected, two test sessions were run on 1 April 2026. The first session placed multiple obstacles across the car's path to force repeated avoidance manoeuvres over an extended run. The second was a single-obstacle test to validate a clean detection and escape cycle. Both sessions were logged to CSV and analysed after each run. This post walks through what the data shows. The Test Setup Test 1 placed multiple obstacles…
Raffay Hassan
7 days ago · 6 min read


Rethinking the Avoidance Logic: A Smarter, More Fluid Approach
The previous post documented getting everything off the bench and onto the car for the first time. The state machine worked in the sense that the car did reverse, turn, and escape. But there was a persistent problem: in anything tighter than a wide-open space, the car got confused. It would reverse correctly, then get stuck trying to turn in one direction repeatedly. It would glide toward an obstacle and not stop in time. The root cause was not a single bug. It was an archite…
Raffay Hassan
7 days ago · 4 min read


From Bench to Car: The System That Finally Drives Itself
Image 1: RC Car with Components. The previous post documented bench testing in full. The LiDAR was tuned with a persistence filter and a narrow forward FOV. The radar pipeline had MTI clutter suppression, CFAR detection, and a nearest-neighbour tracker. YOLO was running on CUDA with bounding-box distance estimation. The sensor health monitoring layer was built. Everything was sitting on a desk pointing at a wall. The next step was getting it on the car. That sentence sounds s…
Raffay Hassan
Mar 30 · 6 min read
Why CARLA Is Not the Digital Twin in My Project
When the term digital twin is mentioned in autonomous vehicle research, many people assume it refers to a simulation environment. In many cases, this leads to the assumption that the CARLA simulator itself acts as the digital twin. However, in my project this interpretation is not entirely accurate. CARLA plays an important role in the development process, but it is not the digital twin. Understanding this distinction was one of the most important conceptual steps in the pro…
Raffay Hassan
Mar 12 · 2 min read
Why My Autonomous Vehicle Project Is Different From Existing Solutions
Autonomous driving has become one of the most rapidly developing areas of robotics and artificial intelligence. Companies such as Waymo, Tesla, and Mobileye have spent years developing sophisticated perception systems capable of detecting objects, predicting movement, and making complex driving decisions. At first glance, building an autonomous vehicle perception system might appear to simply replicate what these companies have already achieved. However, my final-year project…
Raffay Hassan
Mar 12 · 4 min read
Tuning LiDAR, Radar, and Adding Camera-Based Distance to the Stack
The hardware is all there. The LiDAR is spinning, the radar is streaming tracks, and everything lands in one dashboard on the Jetson. But getting the sensors to a point where they actually mean something, where a warning is a warning and silence is silence, took a lot longer than getting them plugged in. This post covers what I changed in the LiDAR filtering, the radar signal processing pipeline, how YOLO estimates distance from bounding boxes, and the sensor health monit…
Raffay Hassan
Mar 10 · 11 min read


YOLOv8 on Jetson CUDA: Trying to Fix the LiDAR Noise Problem With a Camera
The LiDAR had a problem I couldn't ignore: it couldn't tell a real obstacle from a reflection. A shiny floor, a piece of glass, even the wrong angle of light: all of them would produce a confident STOP alert from nothing. The persistence filter helped, but it didn't solve the root issue. The solution was to add a camera and make the LiDAR and camera cross-validate each other. An obstacle only triggers an alert if both sensors agree something is there. The LiDAR provides range…
Raffay Hassan
Mar 3 · 5 min read


Fusing LiDAR and Radar Into One Collision Decision
With the LiDAR running on the Jetson and the radar streaming tracks from the Pi, the next step was getting them talking to each other. Two independent sensors, two separate data streams, and I needed one unified answer: is the path ahead safe or not? This was still all bench testing at this stage: sensors on a desk, pointed across the room. The goal here was to get the fusion logic, threading model, and live GUI working correctly before anything goes near the actual car. How the Two Devi…
Raffay Hassan
Mar 3 · 3 min read


Concept Milestone Poster Reflection Blog
Last week I presented my research poster on “Sensor-Driven Digital Twin Framework for Collision Prevention in Autonomous Systems.” The poster focused on my investigation into improving perception reliability in autonomous vehicles through multi-sensor fusion and simulation-based validation. The main concept of my poster was to demonstrate how combining camera-based computer vision, LiDAR distance sensing, and mmWave radar velocity measurement can improve robustness compared t…
Raffay Hassan
Mar 3 · 2 min read


Building the Sensor Foundation: LD06 LiDAR and 60 GHz Radar
I've been working on the hardware phase: three sensors, two boards, and one unified dashboard to tie it all together. Before any of it goes near the actual car, though, I wanted to build and validate each piece on the bench first. No point bolting things to a chassis if the software isn't working yet. This first post covers the two range sensors: the LD06 LiDAR and the BGT60TR13C 60 GHz radar on the DreamHAT+ board. They do very different things: the LiDAR sees geometry, the rad…
Raffay Hassan
Feb 27 · 5 min read


Phase 2 Real-World Hardware Platform for Autonomous Collision Avoidance
After developing and validating the perception algorithms in simulation using the CARLA autonomous driving environment, the project progressed to a real-world deployment on physical hardware. This phase focuses on transferring the digital twin concepts into a functioning autonomous system capable of sensing and reacting to real obstacles. Unlike simulation, real environments introduce sensor noise, communication delays, imperfect measurements, and physical constraints. Theref…
Raffay Hassan
Feb 27 · 3 min read


YOLO Implementation and Performance Analysis Pipeline
Phase: 1 (Simulation Baseline) Focus: YOLOv8n inference + lane-relevance filtering + performance reporting Overview This blog documents the YOLO implementation used, including the offline playback pipeline designed to evaluate detection behaviour and real-time feasibility. The aim was not only to run detections, but to quantify performance (latency/FPS) and reduce irrelevant detections using a lane-focused filtering approach. Why YOLO in This Phase? YOLO provides fast obj…
Raffay Hassan
Feb 16 · 3 min read


Collision Avoidance Scenarios with Multi-Sensor Fusion in CARLA
Phase: 1 (Scenario Baseline) Focus: Lane following + radar/LiDAR fusion + TTC + emergency braking + avoidance manoeuvre Overview This blog documents the Phase 1 scenario implementation of an autonomous collision avoidance pipeline in CARLA. The focus was to create repeatable scenarios, apply multi-sensor fusion, and validate safety behaviours (emergency braking and post-stop manoeuvres) under both normal and adverse weather conditions. Scenario Design (Town04) Town04 was se…
Raffay Hassan
Feb 16 · 3 min read
What Is the Digital Twin in My Project?
One of the most important conceptual clarifications in my final-year project was understanding that the Digital Twin is not a separate software application like CARLA, Unreal Engine, or a commercial industrial platform. Instead, the Digital Twin is an architectural layer within the system. This distinction is fundamental to understanding the contribution of the project. Digital Twin ≠ Simulator It is easy to assume that CARLA itself is the Digital Twin, since it provides: A 3…
Raffay Hassan
Feb 11 · 2 min read


Project Setup Finalisation
Sensor-Driven Digital Twin for Collision Prevention The infrastructure for the Sensor-Driven Digital Twin project has now been fully finalised. With the system architecture clearly defined and all major components allocated to dedicated hardware, the project is ready to transition into structured implementation. The setup phase focused on establishing a scalable, low-latency architecture capable of supporting both simulation-based testing and real sensor-driven operation. Ded…
Raffay Hassan
Feb 11 · 2 min read


Weekly Reflection: Peer Feedback on My Project Idea
This week, I took part in a cross-disciplinary peer feedback session where students from different engineering departments discussed and reviewed each other’s project ideas. The aim of the activity was to gather constructive feedback and refine our project direction through discussion. I presented my research project on developing a sensor-driven digital twin for collision prevention, which combines computer vision, LiDAR, and mmWave radar to evaluate safety behaviour in simu…
Raffay Hassan
Feb 5 · 1 min read
Research Day: Is Sensor Fusion Better Than Computer Vision Alone?
One of the key questions explored in this project is whether sensor fusion provides a more reliable collision-prevention strategy than relying solely on computer vision. With modern deep-learning models such as YOLO achieving strong real-time performance, it is reasonable to question whether additional sensors are necessary. Computer vision is highly effective at identifying what is present in the environment. Vision-based models can detect and classify vehicles, pedestria…
Raffay Hassan
Feb 4 · 2 min read
Aims and Objectives
Project Aim The aim of this project is to design, implement, and evaluate a sensor-driven digital twin for collision prevention, with a particular focus on assessing whether multi-sensor fusion offers a more reliable and robust safety strategy than computer vision alone. The project uses simulation and scaled hardware experiments to enable safe, repeatable testing of perception and decision-making behaviour in autonomous systems. Project Objectives To achieve this aim, the pr…
Raffay Hassan
Feb 4 · 1 min read
Starting My Research Project Journey: Building a Sensor-Driven Digital Twin for Vehicle Safety
Welcome to this blog. This space will document my final-year research project in Computer Systems Engineering, where I explore how digital twin technology can be used to improve safety in autonomous and semi-autonomous vehicles. Autonomous driving systems operate in safety-critical environments, where mistakes are costly and real-world testing is both risky and limited. One of the biggest challenges in this field is validating perception and decision-making systems reliably…
Raffay Hassan
Jan 28 · 2 min read