[Image: RoboRacer vehicle on the track]

RoboRacer Autonomous Vehicle

Building a 1/10-scale autonomous racing platform with URI's Autonomous Racing Team, targeting the international F1Tenth competition. Trained ML models on LiDAR data to predict steering and speed, achieving R² > 0.95 for steering and > 0.99 for speed.

Python · scikit-learn · ROS2 · LiDAR

RoboRacer is a 1/10-scale autonomous racing platform developed by the Autonomous Racing Team at URI, with the long-term goal of competing in the international F1Tenth autonomous racing competition. The vehicle integrates compute hardware with LiDAR and vision sensors, running a full ROS2 software stack for perception, state estimation, and real-time control.

As a deliverable for ELE491 (Machine Learning for Engineering Applications, Dr. Megan Chiovaro), I worked with a team of three to train ML models that predict steering angle and vehicle speed from LiDAR sensor data alone. We downsampled 1,080 LiDAR rays to 54 features, then engineered directional minimums (closest obstacle left, center, right) and temporal memory features (previous-step steering and speed) to capture spatial awareness and vehicle inertia. We also downsampled near-zero steering angles to prevent the model from overfitting to straight-line driving.
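The feature pipeline above can be sketched as follows. This is a minimal illustration, not the team's exact code: the function names, the bin-averaging strategy for the 1,080 → 54 downsampling, the left/center/right scan orientation, and the near-zero-steering threshold and keep fraction are all assumptions.

```python
import numpy as np

def build_features(scan, prev_steer, prev_speed, n_bins=54):
    """Downsample a 1080-ray LiDAR scan and append engineered features.

    Assumed layout: `scan` is a 1-D array of 1080 range readings;
    `prev_steer` / `prev_speed` are the previous time step's commands
    (temporal memory features capturing vehicle inertia).
    """
    scan = np.asarray(scan, dtype=float)
    # 1080 rays -> n_bins features by averaging each contiguous bin of 20 rays
    binned = scan.reshape(n_bins, -1).mean(axis=1)
    # Directional minimums: closest obstacle in each third of the scan
    # (left/center/right orientation is assumed; flip if ordered the other way)
    left_min, center_min, right_min = (t.min() for t in np.array_split(scan, 3))
    return np.concatenate(
        [binned, [left_min, center_min, right_min, prev_steer, prev_speed]]
    )  # 54 + 5 = 59 features total

def drop_straight_frames(X, y_steer, thresh=0.02, keep_frac=0.3, rng=None):
    """Randomly drop most near-zero-steering samples so the model does not
    overfit to straight-line driving (threshold and fraction assumed)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    straight = np.abs(y_steer) < thresh
    keep = ~straight | (rng.random(len(y_steer)) < keep_frac)
    return X[keep], y_steer[keep]
```

The temporal features mean the feature matrix must be built in time order, one row per LiDAR frame, before any shuffling or resampling.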

Starting from a Random Forest baseline (R² = 0.69 steering, 0.39 speed), we moved to Histogram Gradient Boosting and achieved final metrics of R² = 0.96 for steering angle and R² = 0.998 for vehicle speed — demonstrating that reactive driving behavior is highly learnable from perception data. We built interactive Gradio demos including a sketch-pad interface that converts user drawings into simulated LiDAR inputs for real-time prediction. The next step is scaling toward more complex neural models and deploying on the physical car for F1Tenth competition.

Built alongside Sean Cooper and Julian Tamayo.

Gallery

[Images: Gradio demo interface; ML model performance results]