Project Setup Finalisation
- Raffay Hassan
- Feb 11
Sensor-Driven Digital Twin for Collision Prevention
The infrastructure for the Sensor-Driven Digital Twin project has now been finalised. With the system architecture clearly defined and all major components allocated to dedicated hardware, the project is ready to transition into structured implementation.
The setup phase focused on establishing a scalable, low-latency architecture capable of supporting both simulation-based testing and real sensor-driven operation.
Dedicated Linux GPU Environment
A standalone Linux machine has been allocated specifically for this project, equipped with:
NVIDIA GPU
1 TB storage
Desktop display environment
Portability for laboratory use
This dedicated system allows:
GPU-accelerated simulation via CARLA
Centralised Digital Twin processing
Sensor fusion computation
Direct physical sensor connectivity in lab settings
This removes the constraints typically associated with virtual machines and shared environments.
CARLA Deployment
CARLA has been installed using the official precompiled Linux binary and is running directly on the GPU machine.
Within this project, CARLA is used as:
A controlled scenario generator
A simulated sensor data source
A visualisation environment for Digital Twin updates
Importantly, CARLA is not treated as the Digital Twin itself. The Digital Twin logic operates independently and updates its internal environment model using either simulated or real sensor data.

Communication Architecture (TCP-Based)
Rather than using ROS middleware, the system uses a lightweight TCP socket-based communication layer.
This decision was made to:
Minimise system complexity
Reduce integration overhead
Lower latency between sensor devices and the Digital Twin
Maintain full control over message structure
The architecture now follows this structure:
Jetson Nano (YOLO perception)
↓ TCP
Raspberry Pi (LiDAR & Radar)
↓ TCP
Central GPU Server
↓
Digital Twin Core
↓
CARLA Visualisation
Each sensor device sends structured JSON messages over TCP to the central server. The Digital Twin continuously updates its world state based on incoming data streams.
This approach provides:
Clean separation of components
Low communication overhead
Greater flexibility
Easier debugging and control
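The TCP layer described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual implementation: the port number and message fields (`sensor`, `timestamp`, `detections`) are hypothetical, and the sketch assumes one JSON object per line over a plain TCP stream.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 9500  # hypothetical central-server address and port

def run_server(ready, received):
    """Accept one connection and collect newline-delimited JSON messages."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:            # one JSON object per line
                received.append(json.loads(line))

def send_message(payload):
    """Send a single JSON message to the central server, then close."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((HOST, PORT))
        sock.sendall((json.dumps(payload) + "\n").encode())

ready, received = threading.Event(), []
server = threading.Thread(target=run_server, args=(ready, received))
server.start()
ready.wait()

# Example message a sensor node might emit (field names are illustrative)
send_message({"sensor": "jetson_yolo", "timestamp": 0.0,
              "detections": [{"class": "person", "distance_m": 4.2}]})
server.join()
print(received[0]["sensor"])  # → jetson_yolo
```

Newline-delimited JSON keeps framing trivial on both ends, which is one of the reasons a raw-socket design stays easy to debug compared to heavier middleware.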
Sensor Infrastructure
The sensing hardware is structured as follows:
Jetson Nano → Camera input and YOLO-based object detection
Raspberry Pi → LiDAR acquisition and radar readings
GPU Server → Digital Twin logic, fusion algorithms, safety evaluation
This distributed sensing architecture allows edge-level processing while keeping environment modelling centralised.
Digital Twin Emphasis
The Digital Twin remains the core intelligence layer of the system.
It maintains:
Dynamic object representation
Static hazard markers
Short-term obstruction memory
Time-stamped environment updates
This enables:
Predictive risk evaluation
Time-To-Collision modelling
Reuse of hazard knowledge for subsequent vehicles
Rather than relying on reactive frame-by-frame perception, the system supports environment-level reasoning.
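A minimal sketch of such a world state is shown below. The field names, units, and the 5-second memory window are assumptions for illustration only; the real twin would carry richer object models and fusion outputs.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    object_id: str
    position: tuple        # (x, y) in metres, illustrative
    velocity: tuple        # (vx, vy) in m/s
    last_seen: float       # timestamp of the latest update

@dataclass
class TwinState:
    """Minimal world model: dynamic objects plus persistent hazards."""
    objects: dict = field(default_factory=dict)   # object_id -> TrackedObject
    hazards: list = field(default_factory=list)   # static hazard markers
    memory_s: float = 5.0                         # short-term obstruction memory

    def update_object(self, obj_id, position, velocity, now=None):
        now = time.time() if now is None else now
        self.objects[obj_id] = TrackedObject(obj_id, position, velocity, now)

    def prune(self, now=None):
        """Forget objects not refreshed within the memory window."""
        now = time.time() if now is None else now
        self.objects = {k: o for k, o in self.objects.items()
                        if now - o.last_seen <= self.memory_s}

twin = TwinState()
twin.update_object("veh_1", (0.0, 0.0), (5.0, 0.0), now=100.0)
twin.update_object("ped_1", (10.0, 2.0), (0.0, 0.0), now=100.0)
twin.prune(now=104.0)     # both refreshed 4 s ago: still remembered
print(len(twin.objects))  # → 2
twin.prune(now=106.0)     # both now stale: forgotten
print(len(twin.objects))  # → 0
```

The time-stamped `last_seen` field is what separates environment-level reasoning from frame-by-frame perception: an obstruction stays in the model for a bounded time even when no sensor currently sees it.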
Three Operational Modes
The finalised architecture supports three modes:
Simulation-Driven Mode – CARLA sensor data updates the Digital Twin.
Sensor-Driven Mode – Real sensor data updates the Digital Twin via TCP.
Shared Hazard Awareness Mode – Hazards detected by one vehicle are stored in the Digital Twin and reused to inform subsequent vehicles.
This structured separation ensures controlled experimentation and clear evaluation metrics.
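One way the mode separation could be expressed in code is a simple enum that selects which input streams feed the twin. The stream names here are placeholders, not the project's actual identifiers.

```python
from enum import Enum, auto

class Mode(Enum):
    SIMULATION_DRIVEN = auto()   # CARLA sensor data feeds the twin
    SENSOR_DRIVEN = auto()       # real sensors feed the twin over TCP
    SHARED_HAZARD = auto()       # stored hazards inform subsequent vehicles

def data_sources(mode):
    """Return the input streams the twin consumes in each mode (illustrative)."""
    if mode is Mode.SIMULATION_DRIVEN:
        return ["carla_sensors"]
    if mode is Mode.SENSOR_DRIVEN:
        return ["jetson_tcp", "raspberry_pi_tcp"]
    return ["hazard_store"]

print(data_sources(Mode.SENSOR_DRIVEN))  # → ['jetson_tcp', 'raspberry_pi_tcp']
```

Keeping the mode an explicit switch makes experiments repeatable: each evaluation run declares up front which data sources were active.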
Transition to Implementation
With the infrastructure, simulator deployment, communication layer, and sensing hardware configuration finalised, the project now progresses into:
Sensor fusion development
Time-To-Collision computation
Hazard persistence modelling
Experimental scenario design
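As a starting point for the Time-To-Collision work, the simplest formulation divides range by closing speed under a constant-velocity assumption. This is only a baseline sketch; the project's actual TTC model may account for acceleration and fused multi-sensor estimates.

```python
def time_to_collision(range_m, closing_speed_mps):
    """Constant-velocity TTC: range divided by closing speed.

    Returns None when the gap is static or opening (no collision predicted).
    """
    if closing_speed_mps <= 0.0:
        return None
    return range_m / closing_speed_mps

print(time_to_collision(20.0, 5.0))   # → 4.0 (seconds until impact)
print(time_to_collision(20.0, -1.0))  # → None (vehicles separating)
```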
The system foundation is now fully established, providing a stable platform for implementing the core sensor-driven digital twin logic.