PiBot
I’ve had a Raspberry Pi gathering dust for a few years, but I’d been itching to do something cool with it. Over the holidays, I was gifted a PiStorms kit. PiStorms is a “shield” for the Raspberry Pi that fits directly onto the GPIO pins and interfaces with Lego Mindstorms motors and sensors.
Mindstorms is a pretty cool concept, but the entire set runs $350 and doesn’t actually allow full programmatic control over the resulting creation. The official environment is more of a drag-and-drop visual programming language.
I was able to download the OS image and flash it to my Raspberry Pi’s SD card. However, the PiStorms shield requires its own direct power source rather than drawing DC power from the Raspberry Pi, which meant a six-AA battery pack and a set of brand new batteries.
Once the brain was operational, I still needed to build the vehicle. Mindstorms is supposedly compatible with all of the Lego Technic sets and parts, so I bought a basic four-wheeled mechanical car for 50 bucks from the Lego store. I chose this model since it appeared to have a surplus of parts and lots of exposed anchor points for modification.
After an evening of following the directions, I had the car assembled (aside from cosmetic touches like stickers). Gears and axles propelled the rear wheels and turned the front two for steering. I was able to modify the frame to mount two medium Mindstorms motors.
Lastly, I configured a mount for the entire PiStorms / Raspberry Pi assembly, which is relatively heavy. Once the OS boots up and the wireless connects, PiStorms serves a PHP frontend website to display info and allow control of the motors. You can connect to the site over wireless and remotely control the motors, driving and turning the car.
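For a rough sense of what that remote control looks like from the client side, here is a minimal sketch of posting a motor command to the robot over the wireless link. The host, endpoint, and payload fields are hypothetical stand-ins; the real PiStorms web frontend has its own URLs and parameters.

    # Hypothetical client sketch: post a drive command to the robot over HTTP.
    # The address, endpoint, and payload fields are illustrative, not the real
    # PiStorms web API.
    import json
    import urllib.request

    ROBOT_URL = "http://pibot.local:8080/motor"  # hypothetical frontend address

    def send_motor_command(motor, speed):
        """POST a speed command (-100..100) for one named motor."""
        payload = json.dumps({"motor": motor, "speed": speed}).encode("utf-8")
        req = urllib.request.Request(
            ROBOT_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.status

    # Drive forward at half power and steer slightly left.
    send_motor_command("drive", 50)
    send_motor_command("steer", -20)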
Some of the challenges:
Power
Once the Pi is wired up to batteries, it absolutely drains them. Six AA batteries in series supply about 9V. When PiStorms is turned on, you can watch the voltage visibly decrease: 8.8…8.7…8.6. The motors and Pi will cease to function once it drops to ~6.5V, so there’s very limited juice in the thing. I’m considering upgrading to a rechargeable RC battery kit, which should extend the lifetime. But for remote robots, battery power is a real issue.
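Since the pack sags so quickly, one small software mitigation is to watch the battery voltage and park the robot before it browns out. The sketch below assumes hypothetical read_battery_voltage() and stop_motors() helpers; the PiStorms Python library does expose a battery reading, but the exact call may differ, so treat both names as placeholders.

    # Hypothetical battery watchdog: stop the motors before the pack drops to
    # the ~6.5V floor where the motors and Pi stop behaving.
    import time

    LOW_VOLTAGE_CUTOFF = 6.8    # park a little above the observed ~6.5V failure point
    POLL_INTERVAL_SECS = 5.0

    def read_battery_voltage():
        """Placeholder: return the pack voltage reported by the PiStorms board."""
        return 9.0  # stub; swap in the real library call

    def stop_motors():
        """Placeholder: command both motors to a safe stop."""
        pass

    def battery_watchdog():
        while True:
            volts = read_battery_voltage()
            if volts <= LOW_VOLTAGE_CUTOFF:
                stop_motors()
                print(f"Battery at {volts:.1f}V, parking the robot.")
                break
            time.sleep(POLL_INTERVAL_SECS)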
Mechanical Engineering
Perhaps one of the most fascinating challenges was engineering the mount points for the motors and the PiStorms brain. The axles themselves jutted from the frame at certain angles, forcing the orientation of the motors. The motors needed enough anchors to maintain torque and accessibility. The PiStorms assembly had to be elevated enough from the frame to ensure smooth turning. Problem-solving the mechanical and structural issues was fascinating because it was so constrained: I only had limited parts and spatial real estate, and the solution space was three-dimensional. Choosing Lego as the base tech for building the robot was absolutely essential here. It would have been much more difficult to experiment with models and configurations if the parts were permanently affixed metal.
Software Control
The robot as it stands now is hardly better than a cheap RC car. It’s able to drive forward and backward and turn the wheels remotely. The feedback loop is somewhat sluggish (manipulating a JavaScript-based joystick, sending HTTP POSTs over the web), and the tuning of the motors is rough (often it will oversteer, over-torquing the steering column). So, there’s much room for improvement in the software.
One crux is that robotic motors and sensors are continuous, but the simplistic software API is discrete: the PiStorms unit works by sending a command to the motors (run for 1 second, spin at 25 rpm, slow down to a stop in 0.1 seconds, etc.). Driving requires iterative polling of the input (every x seconds, check the throttle, translate that to a motor command, send it to the motor). Of course, motors in real cars don’t work this way; there’s a smooth, continuous feedback between the throttle and the power given to the drivetrain. The question is: how can this be represented in software?
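One way to fake continuity on top of a discrete command API is a fixed-rate control loop with smoothing: poll the throttle, low-pass filter it so the commanded power ramps rather than jumps, and re-issue the motor command every tick. The sketch below is only an illustration of that idea; read_throttle() and set_motor_power() are placeholders for whatever input source and motor API are actually in use.

    # Fixed-rate control loop that approximates continuous throttle response on
    # top of a discrete "set power" motor API. read_throttle() and
    # set_motor_power() are placeholders, not a real PiStorms API.
    import time

    POLL_HZ = 20        # how often to sample the input and re-command the motor
    SMOOTHING = 0.2     # 0..1; higher means the power tracks the throttle faster

    def read_throttle():
        """Placeholder: return the current throttle input in the range -1.0..1.0."""
        return 0.0

    def set_motor_power(power):
        """Placeholder: command the drive motor to a power level in -100..100."""
        pass

    def drive_loop():
        commanded = 0.0
        period = 1.0 / POLL_HZ
        while True:
            target = read_throttle() * 100.0
            # Exponential smoothing: step the commanded power toward the target
            # rather than jumping, so the drivetrain sees a ramp, not a spike.
            commanded += SMOOTHING * (target - commanded)
            set_motor_power(commanded)
            time.sleep(period)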
I’d like to add visual sensors and have the PiBot drive itself using some rudimentary computer vision algorithms. A similar issue arises: how often do you poll the sensor? 100 times a second? 500? Is it possible to act (e.g., turn to avoid an obstacle) upon a single view, or is a continuous model of the surroundings required? What kind of data structures and overall program architecture allow this orchestration between input sensors and output motors? These are some of the interesting questions that arise in robotics.
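One common shape for that orchestration is a sense-think-act loop: a sensing thread samples the camera at its own rate and keeps a shared estimate of the world fresh, while a separate control loop reads that estimate and issues motor commands at whatever rate the actuators need. The sketch below shows that structure with placeholder capture_frame(), detect_obstacle(), and steer() functions; it is an architectural illustration, not a working vision system.

    # Architectural sketch: decouple sensing from acting. One thread keeps a
    # shared world estimate fresh; the control loop reads it and commands the
    # motors at its own rate. All sensor/motor functions are placeholders.
    import threading
    import time

    class WorldState:
        def __init__(self):
            self.lock = threading.Lock()
            self.obstacle_ahead = False

    def capture_frame():
        """Placeholder: grab an image from the camera."""
        return None

    def detect_obstacle(frame):
        """Placeholder: rudimentary vision step; True if something is in the way."""
        return False

    def steer(direction):
        """Placeholder: command the steering motor ('left', 'right', or 'straight')."""
        pass

    def sense_loop(state, hz=30):
        while True:
            frame = capture_frame()
            seen = detect_obstacle(frame)
            with state.lock:
                state.obstacle_ahead = seen
            time.sleep(1.0 / hz)

    def act_loop(state, hz=10):
        while True:
            with state.lock:
                blocked = state.obstacle_ahead
            steer("left" if blocked else "straight")
            time.sleep(1.0 / hz)

    state = WorldState()
    threading.Thread(target=sense_loop, args=(state,), daemon=True).start()
    act_loop(state)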
It’s been a fun project to complete the base model, but the truly fascinating road is the one that lies ahead.