
First Real Obstacle Tests: What the Data Shows

  • Writer: Raffay Hassan
  • 7 days ago
  • 6 min read

With the reactive controller in place and the thresholds corrected, two test sessions were run on 1 April 2026. The first session placed multiple obstacles across the car's path to force repeated avoidance manoeuvres over an extended run. The second was a single-obstacle test to validate a clean detection and escape cycle. Both sessions were logged to CSV and analysed after each run. This post walks through what the data shows.


The Test Setup


Test 1 placed multiple obstacles at various distances and angles across the car's path. The goal was to force the avoidance system to respond to consecutive obstacles and demonstrate that the retry and direction flip logic works across repeated encounters. The session ran for approximately 1000 seconds and produced 15,243 logged events.


Test 2 used a random obstacle, usually a bottle, placed directly ahead in the FC zone. The goal was a controlled, clean test of the full avoidance cycle from detection through escape in an uncluttered environment. This session ran for approximately 30 seconds and produced 985 logged events.



Plot 1: LiDAR Zone Distances Over Time


Image 1: plot1_lidar_distances

This plot shows the distance readings from all three LiDAR zones (FL left, FC centre, FR right) over the full duration of each test session. The coloured background shading shows which motor state was active at each point in time.


In Test 1 the FC and FR distances drop repeatedly to near zero before recovering, corresponding to each obstacle encounter. The FL zone shows a persistent low reading throughout the session. This is the ribbon cable issue identified in earlier testing: the IMX477 camera cable hangs loose on the front-left side of the chassis and enters the LiDAR scan plane, producing a constant false reading regardless of what is physically in front of the car. The STOP threshold line at 0.6m is crossed multiple times, confirming the avoidance system was correctly triggered across the session.
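Counting those threshold crossings from the logged trace is straightforward. A minimal sketch, assuming a per-zone distance series extracted from the CSV; the 0.6m STOP value is from the post, while the sample trace below is illustrative, not from the real logs:

```python
# Count falling-edge STOP-threshold crossings in one zone's distance trace.
# The 0.6 m threshold matches the post; the sample data is invented.
STOP_THRESHOLD_M = 0.6

def count_stop_crossings(distances, threshold=STOP_THRESHOLD_M):
    """Count events where the reading drops from above to below the threshold."""
    crossings = 0
    above = True  # assume the trace starts in open space
    for d in distances:
        if above and d < threshold:
            crossings += 1
            above = False
        elif not above and d >= threshold:
            above = True
    return crossings

fc_trace = [2.1, 1.4, 0.55, 0.3, 0.7, 1.8, 0.5, 0.2, 1.1]
print(count_stop_crossings(fc_trace))  # → 2 obstacle encounters in this trace
```

Tracking a rising/falling edge rather than raw sample counts is what separates "15 braking events" from the thousands of sub-threshold rows those events produce.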


In Test 2 the FR zone shows a sustained low reading consistent with the obstacle placed to the right of the car's path. The avoidance event is clearly visible as a brief period of BRAKING and REVERSING states before the car returns to NORMAL.


Plot 2: Motor State Distribution


Image 2: plot2_motor_states

This bar chart shows how many logged events occurred in each motor state across both sessions.


In Test 1 the car spent 93.4% of logged events in NORMAL state, with the remaining 6.6% spread across BRAKING, REVERSING, TURNING and ESCAPING. This confirms the system spends the vast majority of its time driving rather than manoeuvring. Note that these are row counts, not individual events: each state runs for several seconds and logs a row every 50ms, so 496 REVERSING rows do not mean 496 separate reverses.
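The row-count caveat can be made concrete: because each state logs one row per 50ms tick, consecutive identical rows collapse into far fewer distinct episodes. A minimal sketch, with invented sample data:

```python
def collapse_episodes(state_rows):
    """Collapse consecutive identical state rows (one row per 50 ms tick)
    into distinct episodes, returned as (state, row_count) pairs."""
    episodes = []
    for state in state_rows:
        if episodes and episodes[-1][0] == state:
            episodes[-1][1] += 1
        else:
            episodes.append([state, 1])
    return [(s, n) for s, n in episodes]

# Illustrative log excerpt: 30 REVERSING rows here are a single reverse.
rows = ["NORMAL"] * 40 + ["BRAKING"] * 6 + ["REVERSING"] * 30 + ["NORMAL"] * 20
episodes = collapse_episodes(rows)
reversing_episodes = sum(1 for s, _ in episodes if s == "REVERSING")
print(episodes, reversing_episodes)
```

Applied to the real log, the same collapse turns the 496 REVERSING rows into a handful of actual reverse manoeuvres.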


In Test 2 the distribution is cleaner with a single short avoidance cycle visible in the low counts of non-NORMAL states, confirming the single obstacle was handled in one clean sequence.


Plot 3: Fusion Level Analysis


Image 3: plot3_fusion_levels

The pie charts show the proportion of time spent in each fusion level. The timeline below each pie shows how the fusion level changed over the course of the session.


Test 1 shows the system was in IMMINENT for 61.2% of the session and CAUTION for 34.8%. The persistent FL false reading from the ribbon cable is the primary reason for the high IMMINENT percentage: the left zone was continuously reading below the IMMINENT threshold regardless of actual obstacle presence. Only 3.9% of the session was SAFE. In a correctly mounted sensor configuration this distribution would shift significantly toward SAFE during open driving.
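This behaviour follows from the fusion level being driven by the minimum distance across the forward zones, so one permanently low zone pins the whole level. A sketch of that mapping; the post names the levels but not their exact cutoffs, so the 0.6m and 1.5m values below are illustrative placeholders, not the real configuration:

```python
# Illustrative fusion-level mapping. The cutoff values are assumed
# placeholders; the post does not state the real IMMINENT/CAUTION limits.
IMMINENT_M = 0.6
CAUTION_M = 1.5

def fusion_level(min_forward_distance):
    """Map the minimum forward-zone distance to a fusion level."""
    if min_forward_distance < IMMINENT_M:
        return "IMMINENT"
    if min_forward_distance < CAUTION_M:
        return "CAUTION"
    return "SAFE"

# The FL ribbon-cable reading (~0.4 m) pins the level at IMMINENT
# no matter how open FC and FR actually are:
zones = {"FL": 0.4, "FC": 3.2, "FR": 2.8}
print(fusion_level(min(zones.values())))  # → IMMINENT
```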


Test 2 shows 95.2% IMMINENT, which again reflects the sensor mounting issue rather than actual obstacle density. The single obstacle test was brief enough that most of the logged time was captured during the approach and detection phase.


Plot 4: YOLO Camera Detections


Image 4: plot4_yolo_detections

This chart shows the object classes detected by the YOLO camera model across both sessions.


An important note about these results: the test obstacles were random everyday objects, not the formal driving hazards YOLO was specifically trained to flag. The YOLO model used here was trained on the COCO dataset, which contains 80 general object classes. When it encounters an unfamiliar object it assigns the closest matching known class. As a result, the detections shown here (bottle, vase, book, laptop and others) are the nearest COCO class match rather than a precise semantic classification of the actual obstacle.


This is expected behaviour. The camera's role in this system is cross-validation: it confirms whether a LiDAR-detected obstacle is a real physical object rather than a noise return. For that purpose the exact class label matters less than whether a detection exists at all.
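That cross-validation role can be expressed in a couple of lines. This is a sketch of the idea only, not the project's actual fusion code:

```python
def confirm_obstacle(lidar_triggered, yolo_detections):
    """Cross-validation sketch: any camera detection in the same time window
    counts as confirmation that the LiDAR return is a physical object.
    The specific COCO class label is deliberately ignored."""
    return lidar_triggered and len(yolo_detections) > 0

print(confirm_obstacle(True, ["vase"]))  # nearest-class label still confirms
print(confirm_obstacle(True, []))        # no detection: possible noise return
```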


Radar shows zero detections in both sessions. This is also expected. The radar pipeline applies Moving Target Indicator (MTI) filtering, which suppresses static reflections. Since all test obstacles were stationary objects with near-zero velocity, they were correctly filtered out by the MTI stage. The radar is designed to detect moving objects and TTC threats; static obstacles are intentionally suppressed.
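A minimal sketch of why the MTI stage yields zero detections for these tests; the 0.1 m/s velocity gate below is an assumed illustrative value, not taken from the real pipeline:

```python
# Illustrative MTI gate: returns with near-zero radial velocity are
# treated as static clutter and dropped. Gate value is assumed.
MTI_MIN_SPEED = 0.1  # m/s

def mti_filter(returns):
    """Keep only radar returns with meaningful radial velocity."""
    return [r for r in returns if abs(r["velocity"]) >= MTI_MIN_SPEED]

returns = [
    {"range": 1.2, "velocity": 0.0},   # stationary bottle: suppressed
    {"range": 4.0, "velocity": -1.3},  # approaching object: kept
]
print(mti_filter(returns))
```

With every test obstacle stationary, the filtered list is empty for the whole session, which is exactly the 0% radar coverage seen in the plots.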


Plot 5: LiDAR Distance Distribution per Zone


Image 5: plot5_lidar_histograms

These histograms show the full distribution of distance readings in each zone across the session.


In Test 1 the FL zone histogram shows a heavy concentration of readings between 0.33m and 0.45m across the entire session. This is the ribbon cable false reading showing up as a persistent spike at close range regardless of actual obstacles. The FC and FR histograms show a more natural spread with peaks at various distances corresponding to real obstacle encounters at different ranges.


In Test 2 the FR zone shows most readings clustered below 1.0m, consistent with the obstacle placed to the right. The FL false reading is visible here too as a cluster around 0.4m. The STOP threshold at 0.6m and the STEER threshold at 0.8m are marked on each histogram to show how many readings fell within the active zones.


Plot 6: Avoidance Event Summary Table


Image 6: plot6_summary_table

This table summarises the key quantitative metrics from each session side by side.

Test 1 produced 15 actual braking events across a 1000-second session, giving an encounter rate of roughly one obstacle event every 67 seconds. 13 of those 15 events progressed all the way through to ESCAPING, giving an 86.7% avoidance success rate. The remaining two events did not complete the full cycle: either a direction flip was triggered mid-sequence, or the path cleared before reversing was required.
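The summary arithmetic can be checked directly from the counts in the table:

```python
# Test 1 summary figures from the table.
braking_events = 15
escaped = 13
session_s = 1000

success_rate = 100 * escaped / braking_events      # 86.7 %
encounter_interval = session_s / braking_events    # ~67 s between events
print(round(success_rate, 1), round(encounter_interval))  # → 86.7 67
```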


Test 2 produced a single braking event which completed successfully, giving a 100% success rate for the single-obstacle scenario.


Plot 7: System Performance Metrics


Image 7: performance_metrics

This chart brings the most important performance indicators together as percentages for direct comparison between the two sessions.

The avoidance success rate of 86.7% in Test 1 and 100% in Test 2 confirms the avoidance logic completed correctly across most encounters. The drive time efficiency of 93% in both sessions confirms the car spends the large majority of its time in useful forward motion rather than manoeuvring.


LiDAR coverage is 87.2% in Test 1 and 100% in Test 2. The lower coverage in Test 1 reflects occasional frames where no valid forward-zone points survived the filtering pipeline, most likely caused by the ribbon cable occupying scan rays.

YOLO coverage of 14% in Test 1 and 53% in Test 2 reflects the camera detecting objects in roughly that proportion of frames. The higher rate in Test 2 is consistent with the car spending more of a shorter session in close proximity to a single obstacle directly ahead.
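Coverage here is simply the fraction of logged frames in which a sensor produced at least one valid detection or point set. A one-line sketch; the frame count below is invented to illustrate the ~14% figure, not taken from the logs:

```python
def coverage_pct(frames_with_data, total_frames):
    """Coverage: percentage of logged frames with at least one
    valid detection/point from the sensor."""
    return 100 * frames_with_data / total_frames

# Illustrative only: ~2134 YOLO-positive frames out of the 15,243
# Test 1 events would give the ~14% coverage reported above.
print(round(coverage_pct(2134, 15243), 1))  # → 14.0
```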


Radar coverage is 0% in both sessions as expected for static obstacles.

The obstacle response rate of 93.3% in Test 1 indicates that of the 15 braking events, 14 proceeded through to a full REVERSING event, confirming the car committed to the complete avoidance manoeuvre in nearly all cases.


The most important outstanding hardware fix remains the ribbon cable routing. Once the FL false reading is resolved the fusion level distribution will reflect actual obstacle encounters rather than a permanent sensor impairment, and the performance metrics will give a cleaner picture of how the system behaves in genuinely open space.




Multiple Test Clips




The video is a combination of clips from the different tests conducted. The two tests analysed in this blog are taken from these clips, and the logs were saved during those runs.




 
 
 


© 2026 by Department of Science and Technology

 
