Perl Advocacy Fail

Guy comes by Perlmonks wondering why his Perl program is so slow to start on a Raspberry Pi. Muses that Perl may be inappropriate for small platforms like this, and that perhaps the program should be rewritten in C. Monks get salty at the thought.

So great, now that guy probably won’t be back.

Now, the program in question was I/O bound, particularly on loading the Astro::Sunrise module. The initial thought of many Monks was that rewriting in C would not help, but that’s not obvious. Loading the modules involved here is a big chunk of work that, for a C program, would be compiled into a single binary, plus maybe some shared libraries that would likely be loaded at boot time anyway.

Even so, there are better ways of helping here. The program uses threads and Switch, which are both probably unnecessary. Using threads in particular is a big performance suck.

I also double-checked, and the default perl on Raspbian actually is compiled with threads. I’m sure that’s because the base Debian distro has to be compatible with any Perl script you throw at it, but that’s a big, unnecessary performance suck for a little Rpi. I’ll have to check, but Hiveberry might have a more sensible compile of perl. It’s a more up-to-date 5.20, as well (Raspbian comes with 5.14). That could make for a nice performance boost.
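
Checking what a given build of perl was compiled with is easy, since the configuration is exposed through the Config module:

    # Quick check from the shell; a threaded build reports usethreads='define'.
    perl -V:usethreads

    # Or from inside a script -- Config exposes the same compile-time settings.
    perl -MConfig -e 'print $Config{usethreads} ? "threaded\n" : "unthreaded\n"'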

Coding for 80 characters per line — it’s not just for old farts anymore

In a discussion on /r/coding, people once again debated the merits of the old 80-character-per-line rule. My usual argument is that we want to put several code windows next to each other, so yes, we do want to limit things to 80 characters. The author of the linked piece mentions this, but I don’t think he makes a persuasive counterargument.

I might have changed my mind as 4K monitors become standard, but then someone in the discussion linked this:

http://www.pearsonified.com/2012/01/characters-per-line.php

This suggests limiting lines to 50-100 characters for typographical reasons. Now, they’re mostly talking about prose writing there rather than code, but lacking studies that say otherwise, setting the limit to 80-100 characters seems sensible no matter how big monitors get.

GStreamer1 and Device::WebIO::RaspberryPi

Previous versions of Device::WebIO::RaspberryPi grabbed still images from the camera by calling out to raspistill. Given the limitations of the Rpi, this meant loading a whole separate program off the SD card into main memory and executing it.

Meanwhile, the GStreamer framework has a plugin to read from the Rpi camera on its own. Problem was, the existing GStreamer module on CPAN was compiled against the deprecated 0.10 API, and rpicamsrc wouldn’t work against it.

I ended up asking around about 1.0 API bindings on the gtk2-perl list, and they were very patient in walking me through how to create them using Glib::Object::Introspection. Creating the bindings themselves was easy; the hard part was figuring out all the magic it does behind the scenes to link to the C libraries and build Perl classes out of them.
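
At its core it ends up being a thin wrapper around a single setup call. Here’s a rough sketch, assuming the usual Glib::Object::Introspection conventions (the released GStreamer1 module does more than this):

    package GStreamer1;
    # Rough sketch: bind the installed Gst-1.0 typelib through GObject
    # Introspection. Glib::Object::Introspection reads the typelib and
    # builds Perl classes under the GStreamer1:: namespace at load time.
    use strict;
    use warnings;
    use Glib::Object::Introspection;

    Glib::Object::Introspection->setup(
        basename => 'Gst',        # GIR namespace to load
        version  => '1.0',        # the 1.0 API, not the deprecated 0.10
        package  => 'GStreamer1', # root package for the generated classes
    );

    1;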

After getting all that worked out, I released GStreamer1 on CPAN (the version number in the module name follows convention from Gtk2). Short on its heels, Device::WebIO::RaspberryPi 0.006 was released, which uses GStreamer1 to grab camera data for still images.

This greatly improves the wait time for grabbing an image in Device::WebIO::RaspberryPi. It also neatly solves a problem I’ve been struggling with since building the WumpusRover, which is that it was hard to reliably get images off the Rpi camera via Perl. With better Gst bindings, I think this is finally nailed down.

Considering the Security of SSL Client Certs Versus HTTP Basic Auth

People often overlook this option, but SSL allows clients to have their own certificates for authentication. It’s similar to SSH key authentication, except because it’s SSL, it’s mind-numbingly complicated to set up. For best results, you’ll want one client cert for each desktop, laptop, tablet, etc. that will connect to the site.
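
On the Perl client side, at least, presenting a cert once you have one isn’t bad. Here’s a minimal sketch, with made-up file paths and URL, using LWP::UserAgent (which hands the SSL options down to IO::Socket::SSL):

    #!/usr/bin/perl
    # Sketch of an HTTPS request that presents a client certificate.
    # The cert/key paths and URL are placeholders.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new(
        ssl_opts => {
            SSL_cert_file   => '/home/me/certs/client.crt',  # client certificate
            SSL_key_file    => '/home/me/certs/client.key',  # matching private key
            verify_hostname => 1,                            # still verify the server
        },
    );

    my $response = $ua->get( 'https://example.com/protected/' );
    print $response->status_line, "\n";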

Tablets and smartphones are particularly tricky, because they can be stolen so easily. My Galaxy S5, for instance, forces you to use at least a PIN to unlock the phone if a client cert is loaded. Sensible, but also more awkward to use.

Given that I was working on an HTTPS site, I wondered how the security of a long, random password (using HTTP basic auth) compares to SSL client certs.

Basic auth credentials are transferred in plaintext. HTTP does support Digest authentication for plaintext connections, but that’s unnecessary over SSL.

On the server side, basic auth passwords can be stored in encrypted form. Apache’s default htpasswd uses either MD5 or crypt(), neither of which is adequate.

What about the security of the authentication handshake? Consider that SSL initiates connections with public key crypto, but for performance reasons, it uses that exchange to establish a newly created, random block cipher key. The server and client negotiate the specific block cipher, but it’s probably going to be AES128, or maybe something else of around 128 bits.

Therefore, transmitting a password with 128 bits of entropy will be just as secure as AES128. That is, if the password were any stronger than that, an attacker would have an easier time attacking the block cipher than the password.

So what do you need to get to 128 bits of password entropy? It’s a function of how many characters are allowed in the password and its total length. Since we’re talking about characters that can be typed on a keyboard (whichever kind is standard in your country; US for me), we aren’t using the complete space of an 8-bit byte. So we need to get out some math:

H = L * log2(N)

Where H is the bits of entropy, L is the password length, and N is the number of characters that you are allowing in your password.

Here’s 90 characters that can be typed out on a US-standard keyboard:

Run that through the formula, and you find that a 20-character password gets you about 130 bits of entropy, more than AES128. If you’re measuring against AES256, then 40 characters gets you about 260 bits.
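
If you want to check the arithmetic or try other character set sizes, here’s a quick sketch of the formula in Perl:

    #!/usr/bin/perl
    # Bits of password entropy: H = L * log2(N), where N is the size of
    # the character set and L is the password length.
    use strict;
    use warnings;
    use POSIX qw(ceil);

    my $charset_size  = 90;                                  # typeable characters allowed
    my $bits_per_char = log( $charset_size ) / log( 2 );     # log2(90), about 6.49 bits

    for my $target_bits (128, 256) {
        my $length = ceil( $target_bits / $bits_per_char );  # can't type partial characters
        printf "%d-bit target: %d characters gives about %.0f bits\n",
            $target_bits, $length, $length * $bits_per_char;
    }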

Given that, I wonder if it’s even worth using SSL client certs on top of HTTPS instead of a strong password with basic auth. Apache’s password storage needs modernizing, but that can be handled with server modules.

Device::WebIO Family Version 0.002

A little while ago, I made a tentative release of Device::WebIO. That was a bare-bones version, so I didn’t talk much about it. I’ve now finished up what I wanted for v0.002, which gets things to where I want them.

Device::WebIO provides a unified API for controlling all the wonderful System on a Chip hardware that’s been coming out of late. If you wanted to control one of these devices in Perl previously, you would have used HiPi for the Raspberry Pi, Firmata for the Arduino, or Device::PCDuino for the PCDuino. Each of these has its own API. With Device::WebIO, you use a single interface with a driver for your specific device.
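
To give a flavor of what that looks like, here’s a rough sketch of toggling a GPIO pin through the unified interface; the method names are my paraphrase from memory, so treat them as an assumption and check the Device::WebIO docs rather than copying this verbatim:

    #!/usr/bin/perl
    # Rough sketch of driving a GPIO pin through the unified interface.
    # Method names are paraphrased from memory, not copied from the docs.
    use strict;
    use warnings;
    use Device::WebIO;
    use Device::WebIO::RaspberryPi;

    my $rpi   = Device::WebIO::RaspberryPi->new;
    my $webio = Device::WebIO->new;
    $webio->register( 'rpi', $rpi );         # register the driver under a name

    $webio->set_as_output( 'rpi', 17 );      # GPIO 17 as an output
    $webio->digital_output( 'rpi', 17, 1 );  # drive it high

    # Swapping in Device::PCDuino or a Firmata-backed driver should mean
    # the same calls against a different registered name.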

The project borrows from WebIOPi, a Python project which is specific to the Raspberry Pi. What I really liked about it was the REST interface, which lets you control the pins over a web browser. But I also wanted to branch out from just the Raspberry Pi, and hey, I also prefer Perl.

So I grabbed the REST interface and implemented it in Device::WebIO::Dancer. It’s mostly compatible with WebIOPi’s interface, but I made a few changes where it tended to be too specific to the Raspberry Pi. For instance, Analog-to-Digital Converters need a voltage reference and a bit resolution. On the Rpi (or rather, on expander boards for the Rpi, since it doesn’t have an ADC pin of its own), the voltref and resolution values are the same across all pins, but that’s not true on the PCDuino. The original interface didn’t take a pin for those calls, so I added a pin value to the call.

All such changes are documented in the pod for the REST API.

Hooking into @INC

In UAV::Pilot, there’s a shell called uav that’s meant to be an easy way to mess around. It takes arbitrary Perl commands and runs them through eval(). By loading up libraries into its context namespace, we can provide commands for all your basic UAV needs.

The old way of loading these libraries was to write them as a normal module, except without a package statement at the top. When you loaded a library, UAV::Pilot::Command would go digging around in the distro’s share dir for the file, slurp it in, add a package statement for the namespace we want, and hand the result to eval().
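
Roughly, the old loading code amounted to something like this; the names are made up and it’s heavily simplified, but it shows the shape of the approach:

    # Simplified sketch of the old approach; names are illustrative, not
    # the actual UAV::Pilot::Command code.
    use strict;
    use warnings;
    use File::ShareDir ();
    use File::Spec ();

    sub load_command_lib {
        my ($name) = @_;
        # File::ShareDir points at the *installed* copy of the distro's share dir
        my $dir  = File::ShareDir::dist_dir( 'UAV-Pilot' );
        my $path = File::Spec->catfile( $dir, "$name.pm" );

        open( my $in, '<', $path ) or die "Can't open $path: $!";
        my $code = do { local $/; <$in> };
        close $in;

        # Wrap the package-less file in the namespace we want, then eval it
        eval "package UAV::Pilot::Commands::$name;\n$code";
        die $@ if $@;
        return 1;
    }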

This worked OK, but it had the downside that for development, you couldn’t make in-place fixes to the library. File::ShareDir gives you back the path to the distro’s system share directory, so you had to install the development distribution first and then find out if your fixes worked.

Then, I saw this tidbit in the perldelta for 5.20:

“Since Perl v5.10, it has been possible for subroutines in @INC to return a reference to a scalar holding initial source code to prepend to the file. This is now documented.”

Specifically, it’s documented in perlfunc under the require entry. But what I really wanted was the method of sticking a subroutine in @INC.

When you do this, the subroutine is passed a reference to itself and the name of the file being loaded. It should return a list of up to four things:

  1. A reference to a scalar holding source text to prepend to the module
  2. A filehandle to read the module source from
  3. A subroutine reference which, if there is no filehandle above, will be called in a loop to generate the module text until it returns 0
  4. A state variable to pass to that subroutine

For my purposes, I only needed to return the first two.
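
Here’s a stripped-down sketch of such a hook; the path prefix and target namespace are made up for illustration, and it’s not the exact UAV::Pilot code:

    # Minimal sketch of an @INC hook that prepends a package statement.
    # The "Commands/" path prefix and target namespace are made up here.
    use strict;
    use warnings;

    unshift @INC, sub {
        my ($self, $file) = @_;   # the hook itself, and e.g. "Commands/Basic.pm"
        my ($name) = $file =~ m{^Commands/(\w+)\.pm$}
            or return;            # decline anything that isn't ours

        # Find the real file in the normal module paths
        my ($path) = grep { -f $_ } map { "$_/$file" } grep { !ref } @INC;
        return unless defined $path;
        open( my $fh, '<', $path ) or return;

        # First value is prepended to the source; Perl reads the rest of
        # the module from the filehandle.
        my $prepend = "package My::Shell::Commands::$name;\n";
        return ( \$prepend, $fh );
    };

    require 'Commands/Basic.pm';  # loads with the injected package statement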

This means the Commands modules can be in the regular module paths (though still without a package statement). This was implemented in the UAV::Pilot v1.0_0 trial release, and UAV::Pilot::ARDrone has been ported to the new way, too. WumpusRover should be forthcoming.

I’m letting the CPAN smoke testers go over the trial release before I put the big official v1.0 on the module, but we’re looking good so far.

No, Heartbleed isn’t likely to have been purposely introduced by the NSA/FBI/Mossad/Moon Nazis

As a rule, stupidity is more likely than malice. The simple proof of this is that it’s easier to be incompetent than it is to be some grand chessmaster who sees all the pieces and manipulates them at a high level. So it is with Heartbleed.

Consider what had to go wrong for this bug to be introduced:

  • Automatic checks in memory allocators were slow on a handful of platforms
  • OpenSSL devs decided to put in a compile flag for using their own allocator, which is fast on all platforms
  • OpenSSL devs stopped testing builds compiled without the custom allocator
  • OpenSSL is a general mess, which makes the code difficult to verify and lets bugs go unnoticed for a long time
  • The actual bug was introduced

A group that wanted to deliberately subvert OpenSSL would need all of that to go wrong. If OpenSSL had tested builds for all combinations of compile flags, Heartbleed wouldn’t have happened. If they hadn’t built a custom allocator in the first place, Heartbleed wouldn’t have happened.

Abandon Ship! It’s Time to Ditch OpenSSL

Theo de Raadt is known for general assholery, but when he says “OpenSSL is not developed by a responsible team”, there are very good reasons for him to say that. The project has been a mess for a long time, and this Heartbleed situation has brought it all to the forefront.

It’s long past time to ditch OpenSSL. Firefox and Chrome use NSS, which seems as good an alternative as any.

(Interestingly, there was a proposal to switch Chrome to OpenSSL just a few months ago. Yeah, let’s not.)