Article I wrote for PerlTricks.com:
When used as a web app server, the Raspberry Pi often hosts a small number of static files that rarely change. Although the Raspberry Pi Model B(+) only has 512 MB of RAM, using 10MB for a ramdisk is usually more than enough.
Files will be copied by Apache at startup. If you make changes to these files, you’ll either need to copy them manually, or restart Apache.
Start by making the directory mount point. I used
# mkdir /var/www-ramdisk
Modern Linux systems mount ramdisks through ‘tmpfs’, so add an entry to /etc/fstab:
tmpfs /var/www-ramdisk tmpfs nodev,nosuid,uid=[UID],gid=[GID],size=10M 0 0
Replace [UID] and [GID] with the respective uid and gid that Apache runs under on your system. On the default Raspbian install, this will be the www-data user and group.
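For example, on a stock Raspbian install where Apache runs as www-data, the filled-in entry would look like this (mount accepts user and group names for tmpfs’s uid=/gid= options):

```
tmpfs /var/www-ramdisk tmpfs nodev,nosuid,uid=www-data,gid=www-data,size=10M 0 0
```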
Run mount -a and the ramdisk should appear (use df to confirm).
Next comes the Apache config. Somewhere in the config, you’ll need a PerlPostConfigRequire directive pointing at a startup script. Create /etc/apache2/mod_perl_post_config.pl, and write in:
# Copy files to the RAM disk
(system( 'cp -R /var/www/* /var/www-ramdisk' ) == 0)
    or die "Could not copy files to ramdisk: $?\n";
This shells out during startup to recursively copy everything from the default Apache docroot to the ramdisk.
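For reference, the mod_perl2 directive that pulls the script in at startup would look something like this in the Apache config (assuming a standard mod_perl2 setup):

```apache
# Run the copy script once Apache finishes parsing its config
PerlPostConfigRequire /etc/apache2/mod_perl_post_config.pl
```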
Now grep through your Apache config and replace all references to /var/www with /var/www-ramdisk. Lastly, restart Apache with:
# /etc/init.d/apache2 restart
Check the files with ls -l /var/www-ramdisk and you should see everything that’s in /var/www.
Edit: Forgot to credit domoticz.com, where I got much of the fstab setup.
What does “int a, b, c” do in Perl? Lots of people want to say that this won’t even compile. There’s even a comment on the StackOverflow post accusing OP of posting uncompilable code.
It is compilable. Without use strict, perl will accept damn near anything. What’s interesting here is that the immediate response from many people is that it’s invalid code. That was my initial reaction too, and it took me a second to remind myself why it works.
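A quick way to see it (note the deliberate absence of use strict):

```perl
# Deliberately no 'use strict' -- that's the point. The barewords
# a, b, and c are quietly treated as the strings "a", "b", and "c".
# int is a named unary operator, so this parses as (int("a"), "b", "c"),
# and "a" numifies to 0.
my @result = (int a, b, c);
print "@result\n";    # prints: 0 b c
```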
It’s interesting that we’re all so conditioned to
use strict that we forgot how Perl looks without it. This is probably a good thing.
One thing the pcDuino lacked was a Perl interface to the I/O pins. I quickly fixed this oversight with Device::PCDuino.
Currently, GPIO and ADC pins are implemented. PWM should be easy to work out. SPI and I2C look like they’ll be a little more involved (see the Sparkfun examples in C).
Usage is about as simple as can be. Use set_input() or set_output() to set the type you want for a given pin, then call input() or output() to receive or set values, respectively. See the POD docs and the “examples/” directory in the distribution for more details.
UAV::Pilot::Video::Ffmpeg v0.2, UAV::Pilot, UAV::Pilot::WumpusRover v0.2, and UAV::Pilot::WumpusRover::Server v0.2
These modules just got some Yak Shaving done. Fixed up their CHANGELOG, spammed license headers on everything, and added a test for POD errors.
Fixed a regression bug (and added a test case) of FileDump taking a filehandle.
UAV::Pilot::Command will now call
uav_module_quit() on the implementation libraries for cleanup purposes.
Same Yak Shaving as above.
Added bin/ardrone_display_video.pl for processing the video data. Input can come either over STDIN or through a raw filehandle. This is implemented in a way that should work for both Unixy operating systems and Windows.
Added UAV::Pilot::ARDrone::Video::Fileno to support the above.
In the uav shell for the ARDrone, there is a parameter you can pass to start_video. Calling it without a parameter has the video stream handled in an external process with the fileno() method, which keeps the stream latency down. Calling it with 1 will instead use a pipe, which has a small but often noticeable lag in the video output. Calling it with 2 will use the old-fashioned way, which does not open an external process. Using an external process tends to take advantage of multicore CPUs better.
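Summarizing the three modes in the shell (the call syntax here is illustrative):

```
uav> start_video;      # external process via fileno(), lowest latency
uav> start_video 1;    # external process fed through a pipe, slight lag
uav> start_video 2;    # handled in-process, no external process spawned
```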
Nav data now correctly parses roll/pitch/yaw. The SDL output was updated to accommodate the corrected values.
Same Yak Shaving as above.
Given that the AR.Drone’s control system sends roll/pitch/yaw parameters as floats between -1.0 and 1.0, I thought the navdata sent things back the same way. I was never getting the output I expected in the SDL nav window, though.
Then last night, I was comparing how NodeCopter parses the same nav packets and saw completely different numbers. Turns out the AR.Drone sends back angles in millidegrees instead.
I feel dumb for not noticing this before. OTOH, this is only “documented” in the AR.Drone source code. The actual docs tell you to look at the navdata in Wireshark and figure out the rest for yourself.
Corrections are now up on GitHub. New releases of the UAV::Pilot distros should be coming soon. These releases clean up quite a few details that I wanted to get done before YAPC, so we should be in good shape.
Hopefully, the TSA won’t bother me too much with an AR.Drone in my carry-on. Some people at my local hackerspace managed to get a whole Power Wheels Racer, including the battery, into their carry-on, so I think I’ll be good.
I’ve been running some crude benchmarks of the UAV::Pilot video timing. As I went over in my last post, I’m planning on having the video be read from the network in one process, and have it piped out to another process for decoding and display.
I added logging statements that show the exact time (using
Time::HiRes::gettimeofday()) that a video packet comes in, and then another log for when we display it on the SDL window.
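The measurement itself is just two timestamps and a difference. A sketch of the pattern (the frame handling in between is a stand-in):

```perl
use strict;
use warnings;
use Time::HiRes qw( gettimeofday tv_interval );

# Timestamp when the video packet arrives off the network
my $arrived = [ gettimeofday ];

# ... decode the frame and hand it to the SDL window here ...

# Timestamp when the frame actually hits the screen
my $displayed = [ gettimeofday ];

# tv_interval() returns elapsed seconds as a float; convert to ms
my $delay_ms = tv_interval( $arrived, $displayed ) * 1000;
printf "Displayed %.3f ms after arrival\n", $delay_ms;
```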
The first benchmark used the existing
uav_video_display that’s in the UAV::Pilot distribution, reading from the file
ardrone_video_stream_dump.bin. This file is in the UAV::Pilot::ARDrone distribution and is a direct dump of the network stream from an AR.Drone’s h.264 video port. It’s primarily used to run some of the video parsing tests in that distro.
I found that on my laptop, there was a delay of 12.982ms between getting the video frame and actually displaying it. At 60fps, there is a delay of 16.667ms between each frame, so this seems quite acceptable. The AR.Drone only goes up to 30fps, anyway, but it’s nice to know we have some leeway for future UAVs.
I then implemented a new script in the UAV::Pilot::ARDrone distro that read the same video frames from STDIN. I had planned on doing this with the same file noted above, like this:
cat ardrone_video_stream_dump.bin | ardrone_display_video.pl
But this ended up displaying only the last frame of video.
My theory on why this happens is that we use AnyEvent for everything, including reading IO and telling SDL when to draw a new frame. Using
cat like that, there’s always more data for the
AnyEvent->io watcher to grab, so SDL never gets a chance until the pipe is out of data. At that point, it still has the last frame in memory, so that’s what it displays.
I tried playing around with
dd instead of
cat, but got the same results.
So I broke down and connected to the actual AR.Drone with
nc 192.168.1.1 5555 | ardrone_display_video.pl
Which did the trick. This does mean that the results are not directly comparable to each other. We can still run the numbers and make sure the delay remains insignificant, though.
And indeed it did. It averaged out to 13.025ms. That alleviates my concern that using a pipe would introduce a noticeable delay and things can go right ahead with this approach.
I’ve been going back into the various UAV::Pilot distros and trying to figure out how best to put video and nav data together. Ideally, nav information would be overlaid directly on the video, with a standalone nav display perhaps being an option.
That doesn’t work, because the video uses a YUV overlay in SDL to spit all the pixels to screen at once. Because of whatever hardware magic SDL does to make this work, drawing on top of those pixels has a nasty flicker effect. SDL2 might solve this, but there hasn’t been much movement on the Perl bindings in a number of months.
Using OpenGL to use the YUV video data as a texture might also solve this, and I suspect it’s the way to go in the long term. Trouble is, Perl’s OpenGL docs are lacking. They seem to assume you already have a solid grounding in how to use OpenGL in C, and you just want to move over to Perl. I messed with OpenGL ES on Android (Java) a while back, but I’m more or less starting fresh. Still, working through an OpenGL book in C might be a good exercise, and then I can revisit this in Perl.
(If anybody else wants to take up the OpenGL stuff, I would wholeheartedly endorse it.)
It would be nice if SDL let you create two separate windows in the same process, but it doesn’t seem to like that.
The trick that’s already implemented in the full releases is to take an SDL window and subdivide it into different drawing areas. This meant implementing a half-assed layout system. It also ended up breaking in the last release, as I called
SDL::Video::update_rect() on the whole window, which caused nasty visual issues with the YUV overlay.
That part is fixed now by only updating parts of the layout that want to be updated. Now the problem is that displaying the nav and video together causes a half-second or so lag in the video. This is unacceptable in what should be a real-time output.
I think the way to go will be to
fork() off and display the video and nav in separate processes. The central process will manage all incoming data from the video and nav network sockets, and then pipe it to its children. Then there are separate SDL windows in each process. The
UAV::Pilot::SDL::Window interface (that half-assed layout system) will probably still be implemented, but will be effectively vestigial for now.
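A minimal sketch of that process layout, using nothing beyond core Perl: the parent stands in for the central process reading the network sockets, and a plain pipe stands in for the video/nav stream. The child just echoes what it receives back so the round trip can be verified; in the real design it would open its own SDL window instead.

```perl
use strict;
use warnings;

# One pipe carries data from the central process to the child;
# a second pipe lets the child report back for this demo.
pipe( my $child_read, my $parent_write ) or die "pipe: $!";
pipe( my $parent_read, my $child_write ) or die "pipe: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;

if( $pid == 0 ) {
    # Child: would parse the data and drive its own SDL window here
    close $parent_write;
    close $parent_read;
    my $data = <$child_read>;
    print $child_write "child got: $data";
    exit 0;
}

# Parent: the central process, feeding data down the pipe
close $child_read;
close $child_write;
print $parent_write "nav packet\n";
close $parent_write;    # flush and signal EOF to the child

my $reply = <$parent_read>;
waitpid $pid, 0;
print $reply;    # prints "child got: nav packet"
```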
This might mean parsing the nav stream redundantly in both the master process and the nav process. There are still things in the master process that would need the nav data. But it’s probably not a big deal.
It’ll also mean all the video processing can be done on a separate CPU core, so that’s cool.
Another benefit: currently, closing the SDL window when using the
uav shell will exit the whole shell. There are probably some SDL parameters I could play with to fix this, but with separate processes, this is no longer a problem.