Low Latency FPV Streaming with the Raspberry Pi

One of the biggest challenges in running my local quadcopter racing group has been overlapping video channels. The transmitter manuals list a few dozen channels, but in practice, only four or five can be used at the same time without interference.

The transmitters being used right now are analog. Digital video streams could make much more efficient use of the spectrum, but this can introduce latency. Of late, I’ve been noticing a few posts around /r/raspberry_pi about how to do an FPV stream with an RPi, and I’ve been doing some experiments along these lines, so I thought it was a good time to share my progress.

It’s tempting to jump right into HD resolutions. Forget about it; it’s too many pixels. Fortunately, since we’re comparing to analog FPV gear, we don’t need that many pixels to be competitive. The Fatshark Dominator V3s are only 800×600, and that’s a $600 set of goggles.

You’ll want to disable wireless power management. When it’s enabled, it tends to take the wireless interface up and down a lot, introducing a delay each time. It’s not saving you that much power anyway; consider that an RPi takes maybe 5 watts, while on a 250-sized quadcopter, the motors can easily take 100 watts or more each. So shut it off with a tweak to /etc/network/interfaces:
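A minimal sketch of the relevant stanza, assuming the wireless interface is wlan0 and gets its address over DHCP with wpa_supplicant:

    # /etc/network/interfaces (interface name and network config are assumptions)
    allow-hotplug wlan0
    iface wlan0 inet dhcp
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
        # turn WiFi power management off every time the interface comes up
        post-up iwconfig wlan0 power off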

And reboot. That should take care of that. Check the output of iwconfig to be sure. You should see a line that says “Power Management:off”.
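For instance, assuming wlan0 again:

    iwconfig wlan0 | grep -i "power management"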

You’ll want to install GStreamer 1.0 with the rpicamsrc plugin. This lets you take frames directly off the RPi camera module, without having to use shell pipes to feed raspivid into GStreamer, which would introduce extra lag.
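The plugin isn’t part of the stock GStreamer packages, so it gets built from source. Roughly, assuming Raspbian package names and the usual gst-rpicamsrc repository:

    # GStreamer itself plus build tools (package names assumed for Raspbian)
    sudo apt-get install gstreamer1.0-tools gstreamer1.0-plugins-good \
        gstreamer1.0-plugins-bad libgstreamer1.0-dev \
        libgstreamer-plugins-base1.0-dev autoconf automake libtool pkg-config

    # Build and install rpicamsrc
    git clone https://github.com/thaytan/gst-rpicamsrc.git
    cd gst-rpicamsrc
    ./autogen.sh --prefix=/usr --libdir=/usr/lib/arm-linux-gnueabihf/
    make
    sudo make install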

With GStreamer and its myriad plugins installed, you can start this up on the machine that will show the video:
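A receiving pipeline along these lines should work, using stock GStreamer 1.0 elements (sync=false keeps the sink from adding its own buffering delay):

    gst-launch-1.0 udpsrc port=5000 \
        caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! \
        rtph264depay ! h264parse ! avdec_h264 ! autovideosink sync=false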

This will listen on UDP port 5000, waiting for an RTP-wrapped h.264 stream to come in, and then automatically display it by whatever means works for your system.

Now start this on the RPi:
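Something like this, assuming the rpicamsrc element from above (the host address here is a placeholder):

    gst-launch-1.0 rpicamsrc bitrate=2000000 ! \
        video/x-h264,width=640,height=480,framerate=30/1 ! h264parse ! \
        rtph264pay config-interval=1 pt=96 ! \
        udpsink host=192.168.1.5 port=5000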

Modify that last line to have the IP address of the machine that’s set to display the stream. This starts grabbing 640×480 frames off the camera with h.264 encoding, wraps them up in RTP, and sends them out.

On a wireless network with decent signal and OK ping times (80ms average over 100 pings), I measured about 100ms of video lag. I measured that by displaying a stopwatch on my screen, then pointing the camera at it and taking a screenshot:

rpi_cam_latency

This was using an RPi 2, decoding on a fairly modest AMD A8-6410 laptop.

I’d like to tweak that down to the 50-75ms range. If you’re willing to drop some security, you could probably bring down the lag a bit by using an open WiFi network.

I’ll be putting together some estimates of bandwidth usage in another post, but suffice it to say, a 640×480@30fps stream comes in under 2Mbps with decent quality. There will be some overhead on that for things like frame and protocol headers, but that suggests a 54Mbps wireless connection will take over 10 people no problem, and that’s on just one WiFi channel.

Newwwwwww QuadCopter!

My AeroQuad kit came yesterday. It’s a Cyclone frame with an AeroQuad32 control board. Spent most of the day at The Bodgery building it, and got most of the way through the frame build:

Half-finished AeroQuad

The plan is to put a Raspberry Pi on board that can be used for control over WiFi using UAV::Pilot, and also 1080p video.

Along those lines, I’m going back to the old WumpusRover code and getting the video streaming up to snuff. It’s ported to GStreamer1 now (instead of the original GStreamer CPAN module, which compiled against the old gst 0.10). One major item is that on a new client connection, we need to wait for an h.264 keyframe to come down the wire before sending the client anything. There are flags on the GstBuffer object that indicate this, but they’re implemented via C macros, which causes some trouble for the Glib introspection-based bindings. I’ll have to write some XS code in GStreamer1 to help support it.
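For reference, the check itself is trivial at the C level; the trouble is just that the macros don’t come through the introspected bindings. A sketch using the stock GStreamer macros:

    #include <gst/gst.h>

    /* A buffer without the DELTA_UNIT flag starts a keyframe, so a freshly
     * connected client can safely begin decoding from it. */
    static gboolean
    buffer_is_keyframe (GstBuffer *buf)
    {
        return !GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT);
    }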

(I feel like a bit of an idiot having my name on the GStreamer1 module on CPAN. With the Glib introspection bindings doing a whole lot of magic, I really have no idea how it works. I was just the one who stepped up to the plate to get it done. If someone came to me directly asking for help, I’d probably have no clue.)

Building OpenTX on Gentoo

I just got a Turnigy 9XR radio for a new quadcopter. I had been thinking about the Parrot Bebop, but I decided that I wanted a grown-up quad, so I got the AeroQuad Cyclone kit instead.

Now, the reason I bought the Turnigy 9XR is that it has an ATmega on board with fully customizable firmware. It even has the standard Atmel ISP port for programming.

I set up my laptop to build one of the major FOSS firmwares out there, OpenTX. Building the firmware itself went fine. The tricky part was building Companion, which is basically a big GUI wrapper around avrdude for burning the firmware. I hit an error during the build that I couldn’t quite figure out at first.

This turned out to be because my Gentoo system was set up with python3.3 as the default, while the Companion build is still on python2.7. I still had a 2.7 build available, though, so it was an easy matter of switching:
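On Gentoo that’s an eselect call; something like this, assuming python2.7 is still installed:

    # see which interpreters are available, then make 2.7 the active one
    eselect python list
    eselect python set python2.7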

Then follow the rest of the Companion build instructions, and switch it back to python3.3 when you’re done.

Bunch of UAV::Pilot Updates on CPAN

UAV::Pilot::Video::Ffmpeg v0.2, UAV::Pilot, UAV::Pilot::WumpusRover v0.2, and UAV::Pilot::WumpusRover::Server v0.2

These modules just got some Yak Shaving done. Fixed up their CHANGELOG, spammed license headers on everything, and added a test for POD errors.

UAV::Pilot v0.10

Fixed a regression bug (and added a test case) of FileDump taking a filehandle.

UAV::Pilot::Command will now call uav_module_quit() on the implementation libraries for cleanup purposes.

Same Yak Shaving as above.

UAV::Pilot::ARDrone v0.2

Added bin/ardrone_display_video.pl for processing the video data. The video can come in either over STDIN or through a raw filehandle. This is implemented in a way that should work for both Unixy operating systems and Windows.

Added UAV::Pilot::ARDrone::Video::Stream and UAV::Pilot::ARDrone::Video::Fileno to support the above.

In the uav shell for the ARDrone, there is a parameter you can pass to start_video. Calling it without a parameter handles the video stream in an external process using the fileno() method, which keeps the stream latency down. Calling it with 1 will instead use a pipe, which adds a small but often noticeable lag to the video output. Calling it with 2 will use the old-fashioned way, which does not open an external process at all. Using an external process tends to take better advantage of multicore CPUs.

Nav data now correctly parses roll/pitch/yaw. The SDL output was updated to accommodate the corrected values.

Same Yak Shaving as above.

Well, I Feel Stupid

Given that the AR.Drone’s control system sends roll/pitch/yaw parameters as floats between -1.0 and 1.0, I thought the navdata sent things back the same way. I was never getting the output I expected in the SDL nav window, though.

Then last night, I compared how NodeCopter parses the same nav packets and saw completely different numbers. Turns out the AR.Drone sends back numbers in millidegrees instead.

I feel dumb for not noticing this before. OTOH, this is only “documented” in the AR.Drone source code. The actual docs tell you to look at the navdata in Wireshark and figure out the rest for yourself.

Corrections are now up on GitHub, and there should be a new release of the UAV::Pilot distros coming soon. These releases will clean up quite a few details that I wanted to get done before YAPC, so we should be in good shape.

Hopefully, the TSA won’t bother me too much with an AR.Drone in my carry-on. Some people at my local hackerspace managed to get a whole Power Wheels Racer, including the battery, into their carry-on, so I think I’ll be good.

The Performance of Separating Control and Video Display Processes in UAV::Pilot

I’ve been running some crude benchmarks of the UAV::Pilot video timing. As I went over in my last post, I’m planning on having the video read from the network in one process and piped out to another process for decoding and display.

I added logging statements that show the exact time (using Time::HiRes::gettimeofday()) that a video packet comes in, and then another log for when we display it on the SDL window.

The first benchmark used the existing uav_video_display that’s in the UAV::Pilot distribution, reading from the file ardrone_video_stream_dump.bin. This file is in the UAV::Pilot::ARDrone distribution and is a direct dump of the network stream from an AR.Drone’s h.264 video port. It’s primarily used to run some of the video parsing tests in that distro.

I found that on my laptop, there was a delay of 12.982ms between getting the video frame and actually displaying it. At 60fps, there is a delay of 16.667ms between each frame, so this seems quite acceptable. The AR.Drone only goes up to 30fps, anyway, but it’s nice to know we have some leeway for future UAVs.

I then implemented a new script in the UAV::Pilot::ARDrone distro that reads the same video frames from STDIN. I had planned on testing it with the same file noted above, like this:
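Something along these lines, piping the dump file into the new script (exact paths assumed):

    cat ardrone_video_stream_dump.bin | perl bin/ardrone_display_video.pl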

But this ended up displaying only the last frame of video.

My theory on why this happens is that we use AnyEvent for everything, including reading IO and telling SDL when to display a new frame. Using cat like that, there’s always more data for the AnyEvent->io watcher to grab, so SDL never gets a chance until the pipe is out of data. At that point, it still has the last frame in memory, so that’s what it displays.

I tried playing around with dd instead of cat, but got the same results.

So I broke down and connected to the actual AR.Drone with nc:
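Something like this, assuming the AR.Drone’s default address and h.264 video port (192.168.1.1, TCP 5555):

    nc 192.168.1.1 5555 | perl bin/ardrone_display_video.pl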

Which did the trick. This does mean that the results are not directly comparable to each other. We can still run the numbers and make sure the delay remains insignificant, though.

And indeed it did. It averaged out to 13.025ms. That alleviates my concern that using a pipe would introduce a noticeable delay, so things can go right ahead with this approach.

Thinking out Loud: Managing Video and Nav Display Together in UAV::Pilot

I’ve been going back into the various UAV::Pilot distros and trying to figure out the best approach for putting video and nav data together. Ideally, the nav information would be overlaid directly on the video, with a standalone nav display perhaps available as an option.

That doesn’t work, because the video uses a YUV overlay in SDL to spit all the pixels to screen at once. Because of whatever hardware magic SDL does to make this work, drawing on top of those pixels has a nasty flicker effect. SDL2 might solve this, but there hasn’t been much movement on the Perl bindings in a number of months.

Using OpenGL to use the YUV video data as a texture might also solve this, and I suspect it’s the way to go in the long term. Trouble is, Perl’s OpenGL docs are lacking. They seem to assume you already have a solid grounding in how to use OpenGL in C, and you just want to move over to Perl. I messed with OpenGL ES on Android (Java) a while back, but I’m more or less starting fresh. Still, working through an OpenGL book in C might be a good exercise, and then I can revisit this in Perl.

(If anybody else wants to take up the OpenGL stuff, I would wholeheartedly endorse it.)

It would be nice if SDL let you create two separate windows in the same process, but it doesn’t seem to like that.

The trick that’s already implemented in the full releases is to take an SDL window and subdivide it into different drawing areas. This meant implementing a half-assed layout system. It also ended up breaking in the last release, as I called SDL::Video::update_rect() on the whole window, which caused nasty visual issues with the YUV overlay.

That part is fixed now by only updating parts of the layout that want to be updated. Now the problem is that displaying the nav and video together causes a half-second or so lag in the video. This is unacceptable in what should be a real-time output.

I think the way to go will be to fork() off and display the video and nav in separate processes. The central process will manage all incoming data from the video and nav network sockets, and then pipe it to its children. Then there are separate SDL windows in each process. The UAV::Pilot::SDL::Window interface (that half-assed layout system) will probably still be implemented, but will be effectively vestigial for now.

This might mean parsing the nav stream redundantly in both the master process and the nav process. There are still things in the master process that would need the nav data. But it’s probably not a big deal.

It’ll also mean all the video processing can be done on a separate CPU core, so that’s cool.

Another benefit: currently, closing the SDL window when using the uav shell will exit the whole shell. There are probably some SDL parameters I could play with to fix this, but with separate processes, this is no longer a problem.

Announcing: The Great UAV::Pilot Split

The main UAV::Pilot distro was getting too big. The WumpusRover server had already been spun off into its own distribution, and now the same is happening for many other portions of the system.

Reasons:

  • UAV::Pilot hasn’t been passing CPAN smoke testing because of the ffmpeg dependency. Splitting that off means the main distro can still be tested.
  • There’s no reason to get the WumpusRover stuff if you just want to play with the AR.Drone.
  • General aim towards making UAV::Pilot into the DBI of Drones.

UAV::Pilot itself will contain all the basic roles used to build UAV clients and servers, as well as what video handling we can do in pure Perl.

UAV::Pilot::Video::Ffmpeg will have the non-pure Perl video decoding.

UAV::Pilot::SDL will take the current SDL handling, such as for joysticks and video display.

UAV::Pilot::ARDrone and UAV::Pilot::WumpusRover will take the client end of things for their respective drones.

I expect the first release to have interdependency issues. I’ve tested things out on a clean install, but I still might have missed something. I’m going to be watching the CPAN smoke test reports closely and filling in fixes as they come.

UAV::Pilot v0.8 Released — Now Supports WumpusRover

At long last, UAV::Pilot v0.8 has been released. This is a big update with lots of API improvements. Most of those improvements were decoupling the code to support my own WumpusRover in addition to the Parrot AR.Drone. That means a big goal has been reached, where UAV::Pilot can support multiple types of automated vehicles. It also means UAV::Pilot is a major component of the code running on board a UAV, in addition to running the client side.

The WumpusRover is a project I’ve been working on for a while. It’s an old RC car I had lying around, retrofitted with a brushless motor controller, an Arduino, and a Raspberry Pi.

Update: Video of the WumpusRover will be up at http://youtu.be/yeDwb-6ASxw. I’m heading out of town soon, but I wanted to make sure this gets up before I go out the door.

UAV::Pilot runs on the Raspberry Pi as a server, taking packets from the client and passing turning and throttle data to the Arduino. The reasons for the split between the Raspberry Pi and the Arduino are:

  1. The Arduino has better support for the communication pulses used by RC servos and ESCs
  2. The Raspberry Pi running Linux is not a real-time OS, which means a carefully timed signal to a servo could be interrupted by the OS
  3. The Raspberry Pi can run Perl, supports any WiFi adaptor that Linux does, and has an excellent camera module

(The camera module doesn’t yet have direct Perl support. This is one of my upcoming projects, which will give the WumpusRover a video stream, among other things.)

As you can see, the two complement each other’s strengths. In the future, we might see cheap boards that can combine these two uses; the recently announced Arduino Tre looks promising.

I’ll be posting more detailed instructions later. If you’d like to get started on your own rover, you can start with the Arduino code here:

https://github.com/frezik/wumpus-rover

There are also some improvements to the older AR.Drone code. While making changes to the joystick API to support the WumpusRover, I found a case where the AR.Drone’s navdata sends a floating point -NaN. (More evidence that they shouldn’t have been using floating point for this purpose, and that the AR.Drone was badly implemented in general.) UAV::Pilot was crashing in this case, but now handles it gracefully.

Also, the nav data will now be sent over good old-fashioned unicast IP by default instead of multicast. This should make Mac users happy, as multicast isn’t set up out of the box on OSX.