Robot 4: Inverse Kinematics
With a good teleop setup in place, the robot now needs a way to move between different end-effector positions. The manipulation course sets this up as a Jacobian IK problem (if those words mean nothing to you, the course explains it better than I can). The goal today is to be able to specify a gripper pose, and watch as our robot makes its way over to that position.
I watched the lecture on Jacobians a couple of months back, and when diving back into the subject, my first instinct was: why can't we just do this without operating in velocity space? It certainly adds a lot more complication to the problem. My intuitive sense is that we only know how to solve linear problems, and doing IK in velocity space makes our problem linear. This is evident from the code of the Jacobian controller: we compute the Jacobian at the current joint positions, and then use its inverse to nudge our gripper along a velocity that brings it closer to a (linearly-interpolated) path toward its goal. If we wanted to solve the IK problem purely in joint space, we would have to deal with the mess of `sin` and `cos` everywhere, and it seems potentially harder to get feedback.
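To make that concrete, here is a minimal sketch of a single velocity-space step in pydrake. The names (`plant`, `context`, `gripper_frame`) and the plain pseudo-inverse are my assumptions, not necessarily what the course's controller does:

```python
import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

def jacobian_ik_step(plant, context, gripper_frame, V_G_desired):
    """Map a desired 6D gripper spatial velocity [wx, wy, wz, vx, vy, vz]
    to joint velocities, linearized at the current joint positions."""
    # 6 x n Jacobian of the gripper frame, measured and expressed in
    # the world frame, evaluated at the configuration in `context`.
    J_G = plant.CalcJacobianSpatialVelocity(
        context,
        JacobianWrtVariable.kQDot,
        gripper_frame,
        np.zeros(3),           # point fixed in the gripper frame
        plant.world_frame(),   # measured in the world frame
        plant.world_frame(),   # expressed in the world frame
    )
    # The relationship V_G = J_G * qdot is linear at this configuration,
    # so the pseudo-inverse gives the least-squares joint velocities.
    return np.linalg.pinv(J_G) @ V_G_desired
```

The linearity is the whole trick: at a fixed configuration all the `sin`/`cos` terms collapse into the constant matrix `J_G`, and the hard trigonometric problem becomes a least-squares solve.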
Story Time
I actually did work on the arm in brief spurts in between, and learnt some interesting lessons that took me down some fascinating tangents.
The 3D printed robot ended up breaking when the bottom part snapped off after a couple of bad commands. I tried printing the new part at HackManhattan, but the quality of the prints was just not good enough to use on the robot. My solution to this was to... buy a 3D printer - so I procured a BambuLabs A1 with AMS. I'm happy to report that I have no regrets purchasing this marvel of engineering. Bambu has taken the notion of feedback and calibration to an absurd degree - the way the machine just works in so many scenarios is incredible. I've been 3D printing constantly (apart from the week that I broke my printer), but that's a story for another article.
Anyway, I 3D printed the base part again and attached it back to the robot. It was at this point I encountered my worst enemy: stripped screw heads. The Phillips-head screws that come as standard with these servos are quite terrible for repairability - it's so easy to strip the head by just unscrewing it a few times. Having spent over an hour trying to get some errant screws out with pliers, I decided to embrace the good life and ordered some M2 countersunk screws. This purchase was a close second behind the 3D printer for the quality-of-life improvement - no longer was I scared of assembling my robot.
Once I had the whole robot set up, I connected all the motors again and instantly put the robot in a terrible configuration that almost broke my newly printed part. I tried again, carefully, and the same thing happened?! I usually love a reproducible bug, but in this case every reproduction was pushing my motors closer to their breaking point. Trying one last time to see what the problem was, I realized that every time I sent a command to motor 0, the same position command was being sent to motor 1! This explained so many catastrophic behaviors I saw in the past. I didn’t end up getting to the bottom of why this happened, but resetting motor 0 to be motor 9 fixed my problems.
Slow Down and Smell the Fumes
One important problem we ran into last time was jerkiness. The robot moved from point to point so violently that it was impossible to stop it before it did something bad. We hacked around this by writing our own `safe_move_to_pos` routine that slowly moved the robot between two points, but I realized there had to be a better way. Playing around in the Dynamixel UI I saw "Goal Velocity", and with some subsequent digging discovered that it is only useful in velocity control mode. This was the impetus I needed to RTFM though, and I discovered a beautifully written section telling me exactly how to control the velocity and acceleration of my motors.
I updated `robot.py` to take these profiles into account, removed the low-pass filter (LPF) that I had put in front of my inputs for smoothing, and the robot now had silky smooth motions between points.
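For reference, the change amounts to writing two control-table registers per servo before sending position goals. This is a sketch assuming Protocol 2.0 X-series servos (where Profile Acceleration and Profile Velocity sit at addresses 108 and 112); the device path, baud rate, servo IDs, and profile values are all placeholders:

```python
from dynamixel_sdk import PortHandler, PacketHandler

# X-series (Protocol 2.0) control table addresses. Assumption: check
# your servo's e-manual, these differ across Dynamixel families.
ADDR_PROFILE_ACCELERATION = 108
ADDR_PROFILE_VELOCITY = 112

def set_motion_profile(port, packet, servo_id, velocity, acceleration):
    """Cap how fast a servo chases position goals so point-to-point
    moves ramp smoothly instead of jumping at full speed."""
    packet.write4ByteTxRx(port, servo_id, ADDR_PROFILE_ACCELERATION, acceleration)
    packet.write4ByteTxRx(port, servo_id, ADDR_PROFILE_VELOCITY, velocity)

port = PortHandler("/dev/ttyUSB0")  # placeholder device path
packet = PacketHandler(2.0)
port.openPort()
port.setBaudRate(1_000_000)
for servo_id in range(1, 7):        # placeholder servo IDs
    set_motion_profile(port, packet, servo_id, velocity=50, acceleration=10)
```

With both registers set, the servo ramps up, cruises, and ramps down - the trapezoidal velocity profile that shows up again below.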
As an aside, I reminded myself what LPFs do: a (first-order) LPF makes the time response of the output approach the target exponentially.
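In discrete time that's just exponential smoothing; each step closes a fixed fraction of the remaining gap:

```python
def lpf_step(y, target, alpha):
    # First-order low-pass filter: with alpha in (0, 1], the output
    # closes a fixed fraction of the gap each step, so it approaches
    # the target exponentially.
    return y + alpha * (target - y)
```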
Can You Point Me in the Right Direction?
Ok, with all that setup out of the way, we're finally ready to start some work on IK. As part of this, we're switching away from Jupyter notebooks to a single Python script - I find this more reliable, and it lets me have the full power of my editor (and Copilot).
I wanted to run the Jacobian controller, but without hardcoding the gripper velocity like this notebook does. Part of that was understanding how to set some inputs to constant values. The best Drake abstraction I learnt from that notebook was `context = simulator.get_mutable_context()`. The question I kept running into was: how do I get the right context? And this gave me the answer. Another really belated discovery I made is that the upper-case `GetInputPort` allows passing in the name of the port rather than keeping track of indexes everywhere. When you ask the arm to go at a constant velocity, it looks a bit like a child throwing a tantrum.
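Putting those two discoveries together, pinning an input port to a constant looks roughly like this. The `diagram`/`controller` objects and the port name "V_G_desired" are stand-ins for whatever your own diagram uses:

```python
import numpy as np
from pydrake.systems.analysis import Simulator

def run_with_constant_velocity(diagram, controller):
    simulator = Simulator(diagram)
    root_context = simulator.get_mutable_context()
    # Every subsystem owns a slice of the root context; ask the system
    # for its view instead of digging through the diagram by hand.
    controller_context = controller.GetMyMutableContextFromRoot(root_context)
    # GetInputPort takes the port *name* (no index bookkeeping), and
    # FixValue pins the port to a constant for the whole run.
    controller.GetInputPort("V_G_desired").FixValue(
        controller_context, np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.05]))
    simulator.AdvanceTo(5.0)
```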
To actually find start and goal poses, I wrote up an `initial_cond_viz` utility (sidenote: `click` is awesome) to play with some initial-condition joint positions and see what the resulting end-effector positions are. I used pretty conservative start and end positions, and things just worked (in simulation, of course)! We now have the robot moving between two pre-defined points. I tried this on the real robot but my joint safety limits got in the way - so I promptly removed them; who needs to be safe? Besides, the trapezoidal velocity profile I set should make the robot move slowly enough that I can use my e-stop (pulling the power cable out) quickly enough.
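The utility amounts to something like the sketch below: forward kinematics at a given joint configuration, wrapped in a `click` command. The URDF path and the gripper frame name are placeholders:

```python
import click
import numpy as np
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import MultibodyPlant

@click.command()
@click.option("--q", type=float, multiple=True,
              help="Joint positions in radians, one --q per joint.")
def initial_cond_viz(q):
    """Print the end-effector pose for a set of joint positions."""
    plant = MultibodyPlant(time_step=0.0)
    Parser(plant).AddModels("robot.urdf")  # placeholder URDF path
    plant.Finalize()
    context = plant.CreateDefaultContext()
    plant.SetPositions(context, np.array(q))
    # "gripper" is a placeholder frame name; match your URDF.
    X_WG = plant.CalcRelativeTransform(
        context, plant.world_frame(), plant.GetFrameByName("gripper"))
    click.echo(f"translation: {X_WG.translation()}")
    click.echo(f"rotation (rpy): {X_WG.rotation().ToRollPitchYaw().vector()}")

if __name__ == "__main__":
    initial_cond_viz()
```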
Next Steps
I have a bunch of nits I hate about my current code:
- Make sim and real use the same safety logic for joint limits.
- Make sim and real use the same procedure to calibrate to a “home” position from where they are currently.
- Calibrate the real and sim robot to be the same. Some joints are out of alignment, and the logic to correct this is currently split between the Dynamixel UI and the URDF/Onshape definition.
- Make sure the joint limits set in the URDF are correct; this is related to the step above.
- Connect the Jacobian controller directly to the velocity control of the servos and bypass the integration step.
- I noticed that reading the robot position seems to slow down the overall control loop, so I would like to move position reads to a background queue, with some safety checks (like stopping the motion if the latest reading is older than some threshold) - see the sketch after this list.
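Here is a rough sketch of that last item: a background thread owns the slow serial reads, and the control loop treats a stale sample as a stop condition. I've used a lock-protected latest-sample slot rather than a literal queue, and all the names are hypothetical:

```python
import threading
import time

class PositionReader:
    """Poll servo positions on a background thread so the main control
    loop never blocks on the (slow) serial bus."""

    def __init__(self, read_positions, period_s=0.02, stale_after_s=0.2):
        self._read = read_positions    # callable that reads the bus
        self._period = period_s
        self._stale_after = stale_after_s
        self._lock = threading.Lock()
        self._stamp, self._positions = 0.0, None
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            positions = self._read()   # the slow bus transaction
            with self._lock:
                self._stamp, self._positions = time.monotonic(), positions
            time.sleep(self._period)

    def latest(self):
        """Return the most recent positions, or None if the sample is
        stale - the safety check: the caller should stop the motion."""
        with self._lock:
            stamp, positions = self._stamp, self._positions
        if time.monotonic() - stamp > self._stale_after:
            return None
        return positions
```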
In the interest of making progress though, I'm going to move on to more interesting things: my super-short-term goal is to use my smartphone to teleoperate the robot by moving it around in space. Getting there means solving two problems:
- Make the robot move to an arbitrary EE position without having to feed it the "start" position. I'm not sure this is strictly necessary even today, but it seems like the trajectory is pre-computed from the waypoints, rather than generated dynamically in a way that lets the goal location move around.
- Make the robot motion less jerky and potentially move to higher level planning goals rather than lower level dynamics goals.
The reason I want to move towards teleoperation is to train a neural net on the robot. I'm jumping the gun on a lot of the classical robotics the manipulation course goes through, but I would like to start making progress towards my real-world task of plugging a USB-C cable into my laptop, and this feels like the best way.