Due to the number of inquiries I get from my YouTube videos of Gizmo, I wanted to post here some of the frequently asked questions.
I’m glad you’re studying to be an engineer. It can be an extremely fun, challenging, and rewarding career. However, it’s important that you learn something before you take another single derivative. Using Google to find someone to give you all the answers to your project really defeats the purpose of doing the project in the first place. Sure, someone else may have already done what you’re trying to do. Sure, I learned from others and you will too. However, I firmly believe the best way you can help yourself and your education is to step in there and try something. The first try will likely be wrong, and that’s why we as engineers do design analysis before we cut metal. With the prevalence of great tools like Matlab and Simulink (usually with Student versions available at a huge discount), as well as the low cost of hobby robotics parts, do a little math, run a few simulations, and then build something. You’ll learn something from each step. When you find yourself completely stuck and have specific questions to ask about why your solutions might not be working, then hit up the vast knowledge of the internet.
Posing questions to me before you have done any of this doesn’t help either of us. I’ve responded to hundreds of questions about Gizmo, but I generally only respond to those that ask questions based on the results of whatever they’ve already tried. Please don’t ask me to do your project for you, and please don’t ask me to just give you my code. The code is extremely simple; the core of it is only a handful of lines. It’s all that you’ll learn getting to those few lines that is the real reward for doing the project in the first place.
I was lucky enough to purchase them from a co-worker who had them on hand. They are 24V Pittman DC gear motors, 19.1:1 gear ratio, with very high count encoders on the motors. The part number is GM9236S021. For the 6” wheels that I have, this allows the robot to “cruise” at about 1.3 m/s (intentionally close to human walking speed) and still leaves enough headroom to accelerate and overcome most disturbances. The actual speed itself isn’t important as long as the motors have enough torque to accelerate, and the peak speed is fast enough to overcome disturbances the robot might encounter.
Backlash in the gearbox IS extremely important. Cheap motors will have a lot of backlash. Backlash will directly impact how smoothly the robot balances and how far it will oscillate back and forth when standing still.
My general suggestions on motors are:
1) If you’re going to put your money anywhere, get good geared motors with high quality gearboxes.
2) Don’t drive directly off an output shaft that uses a bushing (or drive one with a tensioned belt).
3) Encoders, whether on the motor, output shaft, or wheels, are desirable, if not absolutely necessary.
I generally chose what I had on hand or could purchase locally at low cost.
Wheels – The wheels are model airplane wheels from a hobby shop. I chose these wheels because they are very lightweight and thus have very low inertia. They were also the right size. They have fine traction on hard surfaces or carpet, but poor traction on gravel or sand.
Hubs – The hubs were custom turned on a lathe using a large bolt as a starting piece. The hubs are less than optimal – the set screws that hold them to the shafts tend to work themselves out over time as the robot balances. This could be improved by using two set screws at an angle to each other, but I’ve just never gotten around to doing this.
Gyroscope – Analog Devices ADXRS150 single axis gyroscope. These are available already soldered to a board from a number of suppliers around the net. I used a custom board at the time. These days, just about any gyro will work, and I would probably use a digital variety. The rate range needs to be above about 50 degrees per second, but if you use more than about 300 degrees per second you’ll likely start losing resolution that will be necessary for proper state estimation.
Accelerometers – Analog Devices ADXL202 dual axis analog accelerometer. These are also readily available, although these days digital versions are probably easier to use. It can be done with one axis, but it’s more robust with two.
Microcontroller – STR710 microcontroller development board. I had one, so it was free. I also have access to a rather expensive toolchain for this part, but there are free options out there. The choice isn’t that important; there’s no reason you can’t make it work on a moderate 8-bit microcontroller. If you have no bias, the STM32 series of processors is really nice, and you can get a very cheap development board for under $20 USD from ST, as well as free development tools and software.
This page attempts to share the design, but this page is also as far as I’ll go. I won’t share the actual code because it’s very straightforward and I feel like its creation is one of the more rewarding parts of the endeavor.
Yes, I have one somewhere. No, I won’t send it to you. There is a simple reason for this: it’s a classic design problem for engineering students, and you should take the time to go through the process yourself. However, if you’re not an engineering student, know that you don’t really need a complicated model to have a working robot. The control laws described below can be easily coded and tuned by hand through trial and error. The state space model is also readily available (see Google, or Ogata, Modern Control Engineering). I found Simulink to be the most helpful tool in the design process, and used it to validate the control laws I wanted to try before coding them.
I’m fairly certain this problem can be solved via just about any reasonable control law. I know it’s been solved using PID, fuzzy logic, LQR, state space, analog controllers and likely many more. I used basic linear PID controllers and tuned them by hand.
I use two cascaded loops: a velocity control loop whose output is a commanded angle, which becomes the reference input of the angle control loop. Note that this is possible without a direct-coupling or “feedforward” term because the commanded angle is zero for a steady-state velocity. Non-zero angles will always result in vehicle acceleration.
The inner loop uses the angle estimate as its feedback term and controls via a Proportional-Derivative controller. Since the angle derivative is the output of the gyroscope, we shouldn’t have noise issues related to taking a simple numerical derivative. If you’re starting out, do this part as soon as you have a working state estimator. You should be able to get the vehicle basically balancing with this loop alone, although it will probably wander with a steady state error and may slowly and smoothly accelerate until it’s going too fast and falls over. This happens because the state estimate isn’t perfect and may not register the slow acceleration relative to the very small gyroscope rates.
The outer loop control is a Proportional-Integral controller which outputs a commanded angle into the inner loop controller. The velocity command comes from the (wireless) control link via a joystick. The integral term compensates for trying to balance at the wrong setpoint, since balancing at a non-zero angle will result in a constant acceleration and increasing velocity. In this way the robot can handle changes in the balance point due to loads.
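To make the cascade concrete, here is a minimal sketch of the two loops in Python. All of the gains, and the interface taking the state estimate as (angle, gyro rate, velocity), are illustrative assumptions, not the values or structure used on Gizmo.

```python
class BalanceController:
    """Cascaded loops: outer PI (velocity -> angle command), inner PD (angle -> motor command)."""

    def __init__(self, kp_vel=0.05, ki_vel=0.01, kp_ang=25.0, kd_ang=1.5, dt=0.01):
        self.kp_vel, self.ki_vel = kp_vel, ki_vel  # outer-loop gains (made up)
        self.kp_ang, self.kd_ang = kp_ang, kd_ang  # inner-loop gains (made up)
        self.dt = dt
        self.vel_integral = 0.0

    def update(self, vel_cmd, vel_meas, angle_est, gyro_rate):
        # Outer loop: velocity error -> commanded tilt angle. The integral
        # term absorbs an offset balance point (e.g. carrying a load).
        vel_err = vel_cmd - vel_meas
        self.vel_integral += vel_err * self.dt
        angle_cmd = self.kp_vel * vel_err + self.ki_vel * self.vel_integral

        # Inner loop: angle error -> motor command. The gyro already supplies
        # the angle derivative, so no numerical differentiation is needed.
        ang_err = angle_cmd - angle_est
        return self.kp_ang * ang_err - self.kd_ang * gyro_rate
```

With the robot upright and still, the output is zero; a forward tilt produces a corrective command in the opposite sense.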
There is also some dead-band compensation in the PWM controller which helps to reduce the non-linearity in the PWM due to shaft and rolling friction. This is needed because there is some minimum PWM value required to turn the wheels under load, and that value will very likely be non-zero. I don’t have any backlash compensation, but I have good motors with low gearbox slop.
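A simple form of dead-band compensation just remaps the command around the dead zone. The 8% dead-band below is a made-up placeholder; the real minimum duty cycle depends on the motors, gearbox, and load, and has to be measured.

```python
def apply_deadband(cmd, deadband=0.08, max_cmd=1.0):
    """Remap a normalized motor command (-1..1) around the PWM dead zone.

    'deadband' is the minimum duty cycle that actually turns the wheels
    under load -- an illustrative value, not measured from Gizmo.
    """
    if cmd == 0.0:
        return 0.0
    sign = 1.0 if cmd > 0 else -1.0
    # Rescale |cmd| from (0..1] onto (deadband..max_cmd]
    return sign * (deadband + abs(cmd) * (max_cmd - deadband))
```

This way even a tiny non-zero command produces enough PWM to overcome friction, which removes the flat spot the controller would otherwise have to fight through.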
The orientation of the robot is sensed through the use of a single axis Analog Devices (ADI) analog gyroscope (ADXRS150) and a dual axis ADI analog accelerometer. None of these sensors alone is sufficient to measure the tilt angle of the robot.
However, we can use the fact that the accelerometers, over the long term, can be used to estimate the average tilt (though they are corrupted by high-frequency platform accelerations), while the gyroscope will measure the instantaneous tilt rate of the robot. Since the gyroscope isn’t perfect and has some amount of bias (it reads a non-zero value even when sitting still), simply integrating the gyroscope will result in an estimate of tilt that will “drift” over time.
If we use the arc-tangent function to derive an angle from the X (forward) and Z (down) accelerometers, we will have one estimate of our tilt angle. While this angle will be correct when the vehicle is still (or more precisely, not accelerating), it will be corrupted when the vehicle is accelerating (e.g. when the motors drive the wheels, causing the robot to move). We can still use this as one measurement of the vehicle tilt, knowing it may be inaccurate in the short term (meaning we can’t trust any single measurement).
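As a sketch, the accelerometer-only tilt estimate is just a two-argument arc-tangent of the two axes (axis names and scaling are my assumptions, not the exact convention used on the robot):

```python
import math

def accel_tilt(ax, az):
    """Tilt angle in radians from forward (X) and down (Z) accelerometer axes.

    Only valid when the robot is not accelerating; otherwise the reading
    is corrupted by the platform's own motion.
    """
    # atan2 handles the quadrants and avoids dividing by a near-zero az.
    return math.atan2(ax, az)
```

Upright and still, ax is 0 and az reads 1 g, giving zero tilt; equal readings on both axes correspond to a 45-degree tilt.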
Gyroscopes, as good as they have become recently, are also not perfect. When sitting perfectly still, they may measure a value which is non-zero. This is called bias, and is due to effects within the device as well as environmental effects such as temperature and (in poor gyroscopes) vibration. If we integrate the gyroscope measurement, we will integrate the bias along with the sensor measurement. This means that the integrated gyro value will drift over time. This can be seen by integrating the output of a stationary gyroscope and observing the integrated value over a few minutes. The value will not stay at zero, but will instead show a random walk plus a constant drift due to the bias.
The solution I use is a Kalman filter to optimally combine the accelerometer’s measurement of angle with the gyroscope’s measurement of angular rate. The nice thing about the Kalman filter is that in the process of crunching the numbers, it also produces an estimate of the gyro bias, which is tracked along with the robot’s angle.
However, a Kalman filter is actually overkill for this problem. The reason is that a Kalman filter changes, over time, how much it ‘trusts’ the accelerometers versus how much it ‘trusts’ the gyroscope. We don’t really need that ‘trust index’ to change over time in order to get a usable measurement of angle. The simple answer is a complementary filter. Instead of explaining that here, check out this paper that does a nice job of explaining it.
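For completeness, one complementary-filter update can be sketched in a single line. The blend factor of 0.98 is a common starting point for this kind of filter, not the tuning from this robot.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update blending the integrated gyro with the accelerometer angle.

    alpha near 1 trusts the gyro in the short term (smooth, drifty), while
    the small (1 - alpha) accelerometer weight slowly pulls the drift out.
    0.98 is an illustrative value, not Gizmo's tuning.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Feeding this filter a constant gyro bias shows the key property: instead of the integrated angle growing without bound, the accelerometer term holds the drift to a small, bounded offset.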
A co-worker brought in his Segway yesterday, so I had to take the opportunity for a photoshoot with my balancing robot Gizmo and his bigger cousin.
I took my inverted pendulum balancing robot, added a camera and wireless transmitter, and let everyone around the office take a spin. The event was not without mishap, but it’s a lot of fun to watch people interact with the robot, and not so much fun when they try to block its view, or just don’t get out of its way. Or they kick volleyballs at it. That’s just mean.
I added feedback from the motor encoders this week, which allows the robot to know that it is moving. Previously it only knew it was tilted, so it wanted to run away if it was only tilted slightly. I also wired up the batteries, which were salvaged from a less-than-optimal Dell laptop battery. I’ve still got some work to do, but the velocity feedback is now countering its tendency to run away. The center of gravity is still really low (I’m waiting for mounts to get made to move the batteries up to the top), so it can still outrun itself really easily. Next on the list is to add control from the PC via the Bluetooth module that is currently attached.
This is the beginnings of a robot I’ve been working on. Katie named it Gizmo. Eventually, it will be a two-wheeled balancer and stand up like this on its own. The picture is a little deceiving: the wheelbase is about 14" and it stands 24" high. The electronics for this are almost complete; it will be gyroscope stabilized like a Segway and eventually roam around on its own avoiding obstacles. This hardware is a quick hack so I can get started on the software; at some point the hose-clamp motor mounts will be replaced by something more permanent.
Lemonodor took the info from this Carnegie Mellon page and turned it into a Google Earth KML file. If you save that page as a .kml file, and open it with Google Earth, you’ll get a nice track of the course route.
Lemonodor doesn’t claim this is the official track, and personally I hope it’s not—it is way too fine-grained. If this is the course the ‘bots ran, and they had waypoints this close together, I’m amazed that more didn’t finish. Without too much AI, and with much more GPS-connect-the-dots, this course is very doable. My feeling is that the actual course is closer to that seen on the CMU website, which has points separated by quite a bit, with the hard-core navigation between the points handled by good AI. Maybe someone will release the official route soon.
Stanford’s autonomous robotic vehicle ‘Stanley’ took home the $2-million first prize in the DARPA Grand Challenge. Traversing the 131.6 mile course in just under 7 hours, Stanley beat out both teams from Carnegie Mellon University, H1ghlander and Sandstorm, by over 10 minutes. Only 15 minutes behind the lead pack at 7.5 hours was the Gray Team’s KAT-5, so named because the team’s home of Metairie, LA, just outside of New Orleans, was damaged by hurricane Katrina. The Oshkosh Truck Company’s entry TerraMax was allowed to complete the course this morning, and was the only finisher that crossed the line beyond the 10-hour time limit, at almost 13 hours moving time and over 26 hours on the course (the ‘bot was paused a number of times because of course obstructions, and spent the night paused on the course).
With a great deal of respect for the teams that competed, I will have to say that while I am excited that 5 teams were able to complete the course, it seems like the course was a good deal less complicated than last year. There were a number of obstructions, including tunnels, high-tension power line towers, and a precarious mountain pass, but for the great majority of the time the vehicles were on ground that a compact sedan could traverse. I think that DARPA did a good job of making certain that at least a few of the teams would finish, after last year’s PR nightmare when the best contender only travelled 7.5 miles. But I have to ask: was this really a Grand Challenge, or was it more of a task tailored to the capabilities of the vehicles competing? Regardless of the answer, it is worth noting that DARPA managed to get tens of millions of dollars of research and development from a few million dollars of investment by putting on the Grand Challenge. And, knowing DARPA’s history of return on investment (their motto is to go after high-risk, high-payoff research), this may have been their best turn-around yet.