I've been going back into the various UAV::Pilot distros and trying to figure out how best to approach putting video and nav data together. Ideally, nav information would be overlaid directly on the video, with a standalone nav display perhaps as an option.
That doesn't work, because the video uses a YUV overlay in SDL to spit all the pixels to screen at once. Because of whatever hardware magic SDL does to make this work, drawing on top of those pixels has a nasty flicker effect. SDL2 might solve this, but there hasn't been much movement on the Perl bindings in a number of months.
Using OpenGL to use the YUV video data as a texture might also solve this, and I suspect it's the way to go in the long term. Trouble is, Perl's OpenGL docs are lacking. They seem to assume you already have a solid grounding in how to use OpenGL in C, and you just want to move over to Perl. I messed with OpenGL ES on Android (Java) a while back, but I'm more or less starting fresh. Still, working through an OpenGL book in C might be a good exercise, and then I can revisit this in Perl.
(If anybody else wants to take up the OpenGL stuff, I would wholeheartedly endorse it.)
It would be nice if SDL let you create two separate windows in the same process, but it doesn't seem to like that.
The trick that's already implemented in the full releases is to take an SDL window and subdivide it into different drawing areas. This meant implementing a half-assed layout system. It also ended up breaking in the last release, as I called `SDL::Video::update_rect()` on the whole window, which caused nasty visual issues with the YUV overlay.
That part is fixed now by updating only the parts of the layout that ask to be updated. The new problem is that displaying the nav and video together adds a half-second or so of lag to the video, which is unacceptable in what should be a real-time display.
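The fix boils down to flipping only a widget's own rectangle instead of the whole window. A minimal sketch of the idea, with the window surface and geometry assumed to come from the existing layout code (the sub name and parameters here are illustrative, not the actual `UAV::Pilot::SDL::Window` API):

```perl
use strict;
use warnings;
use SDL;
use SDL::Video;

# Redraw and flip only one pane's rectangle, leaving the rest of the
# window (including the YUV overlay region) untouched.
sub update_pane {
    my ($window_surface, $x, $y, $w, $h) = @_;

    # ... draw this pane's widgets into that region of the surface ...

    # Push just this rectangle to the screen, rather than calling
    # update_rect() on the whole window
    SDL::Video::update_rect( $window_surface, $x, $y, $w, $h );
    return;
}
```

Calling `SDL::Video::update_rect()` with a zero-size rect updates the entire surface, which is exactly what stomped on the overlay before.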
I think the way to go will be to `fork()` off and display the video and nav in separate processes. The central process will manage all incoming data from the video and nav network sockets and pipe it to its children, and each child will run its own SDL window. The `UAV::Pilot::SDL::Window` interface (that half-assed layout system) will probably still be implemented, but will be effectively vestigial for now.
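The skeleton of that parent/child split is just standard `pipe()`/`fork()` plumbing. A rough sketch, with placeholder data standing in for the real socket reads and SDL rendering:

```perl
use strict;
use warnings;

# Parent writes into $to_child; the forked child reads from $from_parent
pipe( my $from_parent, my $to_child ) or die "pipe failed: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if( $pid == 0 ) {
    # Child: would init its own SDL window here, then render
    # whatever the parent sends down the pipe
    close $to_child;
    while( my $msg = <$from_parent> ) {
        # ... decode and display the frame/nav packet ...
    }
    exit 0;
}

# Parent: the central process. It would select() over the video and
# nav network sockets and relay incoming data to the children.
close $from_parent;
print {$to_child} "frame data here\n";    # placeholder for real data
close $to_child;
waitpid $pid, 0;
```

One child per display (video, nav) means each gets its own SDL window and its own event loop, and SDL never has to share a window between the two again.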
This might mean parsing the nav stream redundantly in both the master process and the nav process. There are still things in the master process that would need the nav data. But it's probably not a big deal.
It'll also mean all the video processing can be done on a separate CPU core, so that's cool.
Another benefit: currently, closing the SDL window when using the `uav` shell will exit the whole shell. There are probably some SDL parameters I could play with to fix this, but with separate processes, this is no longer a problem.