Part 1 – The Virtual Rover
If you’ve been following my projects, you know I like building the real stuff first and simulating it later—but this time I flipped the script. Before a single wheel touches the floor, I’m building a full simulation of a differential-drive robot inside Unity. The goal is simple: create a virtual testing ground that looks, feels, and reacts like the real robot will, and eventually let machine learning models drive it.
This is the first phase of what will become a bi-directional simulator—a live feedback loop between a physical robot and a virtual environment. The robot will send sensor data to Unity, Unity will visualize the world and simulate physics, and the two will learn from each other in real time.
The project lives here:
👉 macroflux/monad-unity-basic-rover
What this version does
Right now, the simulator focuses on the basics: a small differential-drive rover that can roll around procedurally generated terrain.
Features
- Functional wheel colliders for real-world-style traction and friction
- Configurable sensors (ultrasonic, IMU, compass) attached to the chassis
- Physics-based control through Unity’s rigidbody system
- A simple control script that interprets velocity commands and simulates forward and rotational movement
It’s all kept lightweight and modular—future versions will let the robot read from external scripts (Python, Arduino, etc.) so it can drive itself using ML models or real sensor data.
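To make the "interprets velocity commands" part concrete, here is a minimal sketch of standard differential-drive kinematics in Python (the language the external scripts will eventually use). The function name and the `wheel_radius` / `track_width` defaults are illustrative assumptions, not values from the repo.

```python
# Split a (v, omega) velocity command into left/right wheel speeds.
# wheel_radius and track_width are placeholder values for this sketch.

def velocity_to_wheel_speeds(v, omega, wheel_radius=0.03, track_width=0.15):
    """Convert linear velocity v (m/s) and angular velocity omega (rad/s)
    into left/right wheel angular speeds (rad/s)."""
    v_left = v - (omega * track_width / 2.0)   # inner wheel slows in a turn
    v_right = v + (omega * track_width / 2.0)  # outer wheel speeds up
    return v_left / wheel_radius, v_right / wheel_radius

# Pure rotation: the wheels spin in opposite directions at equal speed.
left, right = velocity_to_wheel_speeds(0.0, 1.0)
```

The same math runs in reverse on the Unity side: given two wheel speeds, you can recover the commanded `(v, omega)` for logging or comparison against the simulated rigidbody.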
The bigger plan
Phase 1 is all about foundation. I’m setting up the environment, controls, and data flow so that later, I can connect this to a physical robot through serial or Wi-Fi. The Unity side will act like a “digital twin,” receiving IMU and ultrasonic readings, visualizing them, and feeding back simulated sensor data to the real robot’s controller.
Future milestones will include:
- Terrain generation: Each run spawns a new, gently sloped world to navigate.
- Yaw, pitch, and roll physics: The robot’s stability will matter—if it tips too far, it stops.
- Predictive control: Using sensor feedback to prevent flips before they happen.
- Menu + UI system: A startup and pause menu for testing and restarting environments.
- External communication: Eventually, a socket or serial link for Arduino-based controllers.
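The "if it tips too far, it stops" rule from the stability milestone can be sketched with a simple accelerometer-based tilt check. This is a rough sketch under assumptions: the 45-degree threshold and the function name are mine, and a real controller would also filter the raw readings.

```python
import math

# Estimate pitch and roll from a single accelerometer reading and flag
# the rover as tipped past a limit. The 45-degree default is an
# illustrative assumption, not a tuned value from the project.

def is_tipped(ax, ay, az, max_tilt_deg=45.0):
    """Return True if the gravity vector implies pitch or roll beyond the limit."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return abs(pitch) > max_tilt_deg or abs(roll) > max_tilt_deg

flat = is_tipped(0.0, 0.0, 9.81)    # level ground: gravity on z only
tipped = is_tipped(0.0, 9.81, 0.5)  # rolled nearly 90 degrees
```

The same check is a natural starting point for the predictive-control milestone: instead of stopping after a tip, the controller can slow down as the tilt angle approaches the threshold.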
Why start with simulation?
Because testing in reality is expensive, slow, and occasionally smoky. In simulation, you can blow up the physics a dozen times before lunch, tweak friction coefficients, or rewrite control logic without worrying about batteries or replacement parts. It’s also the perfect playground for trying out machine-learning-based navigation—without the fear of watching your bot yeet itself off a table.
Next steps
The next phase will tie this Unity environment to an external process—most likely Python—to begin bidirectional control. That’s where the “learning” part of Monad really starts: models trained in simulation, refined in the real world, and mirrored back again.
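A minimal sketch of what that bidirectional link could look like, assuming newline-delimited JSON over a socket: here a local `socketpair` stands in for the Unity end, and the port-free setup plus the `cmd`/`v`/`omega` field names are assumptions for illustration, not the project's actual protocol.

```python
import json
import socket

def send_command(sock, v, omega):
    """Send one velocity command as a single JSON line."""
    msg = json.dumps({"cmd": "vel", "v": v, "omega": omega}) + "\n"
    sock.sendall(msg.encode())

def recv_line(sock):
    """Read one newline-delimited JSON message from the socket."""
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(1)
    return json.loads(buf)

# A connected socket pair stands in for the real Python <-> Unity link;
# in practice Python would connect to a TCP port that Unity listens on.
py_side, unity_side = socket.socketpair()
send_command(py_side, 0.2, 0.0)
echoed = recv_line(unity_side)   # the "Unity" end decodes the command
```

Swapping `socketpair` for `socket.create_connection((host, port))` is all it should take to point the same framing code at a real Unity listener, and the identical loop works in reverse for sensor data flowing back to Python.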
If you’re curious about how the simulation works, the GitHub repo is open and commented. You’ll find Unity scripts handling wheel motion, IMU emulation, and a simple controller input manager. More to come soon—including how the real-world Arduino-powered rover will sync with it.
