Robot 1: Getting Started
I’m on a quest to relearn robotics after a long hiatus, and as part of that I discovered two things: Russ Tedrake’s manipulation course and Alexander Koch’s Low Cost Robot.
A lot of people are doing a lot of cool things with the low cost robot, but the primary way I’ve seen people use it is by teleoperating it and then training an ACT-style model on the resulting demonstrations. This is awesome, and definitely something I want to do, but I was craving a more first-principles approach to performing a task that didn’t involve just trying to tune a neural network to go from image to action.
On the other hand, the manipulation course is an incredible theoretical foundation for robotics, but when you do it at home without the expensive KUKA iiwa robots they use in the course, it can be hard to stay motivated when you’re just working in simulation. So my goal is to bridge the gap - how can I make the low cost robot work with the manipulation course chapters? That’s what I’m going to take you through over the course of this series.
A really important part of the manipulation course is the project. Robotics is hardly a purely academic endeavor - it felt important to have some end goal for what I’m building. My goal is to get this robot arm to plug a USB-C cable into my Mac every time I set it down on my desk. This feels complex enough to combine fairly advanced CV with really advanced dexterity (especially with just a 6 degree of freedom arm). Don’t worry if those words don’t mean anything to you, you’ll understand them a lot more after watching the first two lectures in the manipulation course.
I’ll also be diving deep into math where I think it’s useful - expect tangents and explorations of things that are brushed over in the notes.
This post covers the first two lectures of the manipulation course. The code for this part is here, although I’d encourage you to try to recreate it from scratch to really understand what’s going on.
Setting up Drake and URDFs
The course notes have a series of accompanying Jupyter notebooks that are made to run in Deepnote. I personally dislike this, and would much rather run things locally where possible, especially since there isn’t anything compute-intensive in the first few lectures. You can find the lecture notebooks here, but here are a couple of tips for running them:
- If you’re running on an ARM Mac, make sure your macOS version is >= 14, since the version of Drake required does not exist for older versions (don’t ask me how long it took to figure this one out).
- Use `uv` for really speedy virtual environments.
- If you open up one of the notebooks, none of the `manipulation`-related code will load properly, since Jupyter doesn’t respect the root-level path you start the notebook from. An easy way around this is to run `export PYTHONPATH=$PYTHONPATH:$(pwd)/manipulation` before starting it up. Otherwise, you’ll have to `cd` to the correct directory every time you start the notebook.
Now, we want to get a really simple robot setup going. Let’s start by copying what we see in the lecture notebook, and getting the appropriate URDF from the robot repository. Once you download that and give it a try, you’ll see that it breaks when you try to load it into Drake, because Drake does not support STLs. Luckily, we have random unvetted third-party packages to come to the rescue - stl2obj. You can use this little snippet to get things converted quickly:
```bash
parallel 'python ../../low_cost_robot/.venv/lib/python3.12/site-packages/stl_to_obj/__main__.py "{}" "{}.obj"' ::: *.stl
```
Then, you have to go into the URDF and replace all the `package://` directives with `file://` directives, as well as all the `.stl` files with `.stl.obj`. A simple find and replace should fix both of these.
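If you’d rather script the find and replace, here’s a minimal sketch in Python. The `low_cost_robot.urdf` filename is a placeholder, and the exact `file://` prefix depends on where your converted meshes live relative to the URDF:

```python
from pathlib import Path

# Placeholder path: point this at the URDF you downloaded.
urdf = Path("low_cost_robot.urdf")
text = urdf.read_text()

# Drake can't resolve package:// URIs without a package map, and it
# can't load STL meshes at all, so point everything at the .obj files.
text = text.replace("package://", f"file://{urdf.parent.resolve()}/")
text = text.replace(".stl", ".stl.obj")

urdf.write_text(text)
```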
At this point, if you run the snippet again, you should see things start to work in the notebook, and be able to get and set joint information for the five available joints. Some interesting things to note here:
- You have to weld `base_link` to the world frame (see the sketch after this list).
- If you look really carefully (unlike me), you’ll notice that the robot in sim does not match the robot you built. It’s missing one whole part.
- All the “scenarios” given in the manipulation course don’t make sense to reproduce here, since we don’t have a “driver” for our arm. The `!IiwaDriver`-style directives are really important and tell the Python code to use an accurate model of the iiwa in sim. We’re not modeling our arm to that degree of precision, and are relying on an `InverseDynamicsDriver` instead. You can define the driver in the YAML file, but I prefer to keep things in code for clarity in this learning phase.
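Since I’m keeping things in code, here’s a minimal sketch of the setup in pydrake, assuming the converted URDF from above is saved as `low_cost_robot.urdf` and its root link is named `base_link` (your link names may differ):

```python
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)

# "low_cost_robot.urdf" is the converted URDF from the previous step.
(arm,) = Parser(plant).AddModels("low_cost_robot.urdf")

# Without this weld the arm is a free-floating body and falls under
# gravity as soon as the simulation starts.
plant.WeldFrames(plant.world_frame(),
                 plant.GetFrameByName("base_link", arm))
plant.Finalize()

diagram = builder.Build()
context = diagram.CreateDefaultContext()
plant_context = plant.GetMyContextFromRoot(context)

# Get joint information for the available joints.
print(plant.GetPositions(plant_context))
```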
At this point, I’d encourage you to play with the `q0` values. Does the zero configuration of the sim match the zero configuration of the real robot?
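Continuing the sketch above (reusing its `plant` and `plant_context` names), checking this looks something like the following; joint ordering and units depend on your URDF:

```python
import numpy as np

# All-zeros configuration: does the simulated pose match your desk?
q0 = np.zeros(plant.num_positions())
plant.SetPositions(plant_context, q0)
```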
A Tangent: Rotational Dynamics
I was very intrigued by the justification provided in the notes for why the joint controllers can be tuned independently for each joint: the motor moments dominate the joint moments. This got me thinking about the equation underlying rotational dynamics, $\tau_{\text{net}} = I\alpha$. Is this derivable from Newtonian linear dynamics?
The answer (which I discovered in this paper) is yes, but with some caveats. First, we have to strengthen Newton’s third law to state that “every action has an equal and opposite reaction, and the forces act along the line joining the masses”. The latter half of the statement is needed to conserve angular momentum: a pair of equal and opposite forces that don’t act along the line joining the masses forms a couple, producing a net internal torque.
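To see where that clause earns its keep, here’s the standard derivation step (my notation, not the paper’s): differentiate the total angular momentum of a system of particles and split the forces into external and internal parts,

$$\frac{d\mathbf{L}}{dt} = \sum_i \mathbf{r}_i \times m_i \ddot{\mathbf{r}}_i = \sum_i \mathbf{r}_i \times \mathbf{F}^{\text{ext}}_i + \sum_{i \neq j} \mathbf{r}_i \times \mathbf{f}_{ij}$$

Pairing $(i,j)$ with $(j,i)$ and using $\mathbf{f}_{ji} = -\mathbf{f}_{ij}$, the internal sum becomes $\sum_{i<j} (\mathbf{r}_i - \mathbf{r}_j) \times \mathbf{f}_{ij}$, which vanishes exactly when each $\mathbf{f}_{ij}$ is parallel to $\mathbf{r}_i - \mathbf{r}_j$ - that is, when the forces act along the line joining the masses. Without that assumption, internal forces could change the system’s angular momentum.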
The second, and more intriguing, part of the paper discusses how we can only derive the rotational equations if we do away with a strong assumption of rigidity. This was really interesting to me, since the rigidity of bodies is something I had never questioned. The authors point out that perfectly rigid bodies are already impossible because of relativity (they would transmit disturbances instantaneously), but more simply, the assumption has to be relaxed when you try to derive the rotational equations from linear dynamics.