No, Heartbleed isn’t likely to have been purposely introduced by the NSA/FBI/Mossad/Moon Nazis

As a rule, stupidity is more likely than malice. The simple proof of this is that it’s easier to be incompetent than it is to be some grand chessmaster who sees all the pieces and manipulates them at a high level. So it is with Heartbleed.

Consider what had to go wrong for this bug to be introduced:

  • Automatic checks in memory allocators were slow on a handful of platforms
  • OpenSSL devs decide to put in a compile flag for using their own allocator, which is fast on all platforms
  • OpenSSL devs stop testing builds compiled without the custom allocator
  • OpenSSL’s code is a general mess, which makes it difficult to verify and lets bugs go unnoticed for a long time
  • The actual bug is introduced

A group that wanted to deliberately subvert OpenSSL would need all of that to go wrong. If OpenSSL had tested builds for all combinations of compile flags, Heartbleed wouldn’t have happened. If they hadn’t built a custom allocator in the first place, Heartbleed wouldn’t have happened.
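
That last step, the bug itself, boils down to trusting a length field. Here’s a toy sketch in Python (emphatically not OpenSSL’s actual code; the record layout is simplified and the “heap” is simulated as a string so the over-read is visible):

```python
# Toy illustration of the class of bug behind Heartbleed: echoing back
# a client-supplied number of bytes without checking the claimed length
# against what the client actually sent.
import struct

def buggy_heartbeat(record: bytes, heap_after_payload: bytes) -> bytes:
    # Record layout: 1-byte type, 2-byte big-endian claimed length, payload
    rtype, claimed_len = struct.unpack_from(">BH", record)
    payload = record[3:]
    # Simulate the process heap: whatever happens to sit after the payload
    memory = payload + heap_after_payload
    # The bug: no check that claimed_len <= len(payload)
    return memory[:claimed_len]

leak = buggy_heartbeat(
    b"\x01" + struct.pack(">H", 40) + b"hat",   # claims 40 bytes, sends 3
    b"...private key material, session cookies...",
)
print(leak)  # echoes "hat" plus whatever was adjacent in memory
```

The actual fix was equally mundane: discard any heartbeat record whose claimed length exceeds the payload actually received.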

Abandon Ship! It’s Time to Ditch OpenSSL

Theo de Raadt is known for general assholery, but when he says “OpenSSL is not developed by a responsible team”, there are very good reasons for him to say that. The project has been a mess for a long time, and this Heartbleed situation has brought it all to the forefront.

It’s long past time to ditch OpenSSL. Firefox and Chrome use NSS, which seems as good an alternative as any.

(Interestingly, there was a proposal to switch Chrome to OpenSSL just a few months ago. Yeah, let’s not.)

Fixing Sturgeon’s Law with Tiny Barriers

Sturgeon’s Law: 90% of everything is crap

FizzBuzz is such a trivial problem that it’s almost insulting to ask an experienced developer to do it. It also solves a specific problem of the hiring process, which is that a few “developers” were slipping through the system, having the right resume and saying the right things in the interview, and then turning out to be unable to code at all. By making the candidate write code even for a trivial problem, you wash these people right out.
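
For reference, the whole of FizzBuzz, sketched here in Python:

```python
# FizzBuzz: for 1..n, print "Fizz" for multiples of 3, "Buzz" for
# multiples of 5, "FizzBuzz" for multiples of both, else the number.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 16):
    print(fizzbuzz(i))
```

Ten lines, no tricks. That’s the whole barrier.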

Playing multiplayer games on the Internet gives me headaches. Too many asshole kids griefing the whole experience for everyone else. So when I wanted to get back into Minecraft recently, I looked for a server with a whitelist. What did I need to do to get on the whitelist? Answer a few questions on the forum post by the server’s owner. That’s hardly any work at all, but it’s more than most random griefers are going to do. That tiny amount of trust added to the system keeps them out, or at least makes them easier to deal with.

When I wanted to play GT5 in a group, I found a racing league on Reddit, where I had to add the organizer as a friend and show up at a specific time. Anybody with a Reddit account and a PS3 could have done the same, but it keeps out people who would deliberately ruin the experience for everyone else. Even if accidents happen (I certainly caused my share), everyone is at least playing in good faith.

These barriers to entry are tiny, but may improve the whole experience to a greater degree than anything else.

Announcing UAV::Pilot::WumpusRover::Server v0.1

The original server code for the WumpusRover was bundled with UAV::Pilot in version 0.8. That meant the HiPi modules were recommended for UAV::Pilot, even though that doesn’t make sense if you’re installing anywhere other than a Raspberry Pi.

So the server modules have been spun off into their own CPAN distro. The modules will be removed from the main UAV::Pilot distro in version 0.9.

On CPAN now

Why I Don’t Like Perl State Variables

State variables work something like C static vars. Statics are a forgivable feature in C, because C has no closures, so there’s no way to tie persistent state to a single function using lexical scope alone.

Here’s an example of Perl’s state variables, taken from Learning Perl, 6th Ed.:

use 5.010;

sub running_sum {
    state $sum = 0;
    state @numbers;

    foreach my $number ( @_ ) {
        push @numbers, $number;
        $sum += $number;
    }

    say "The sum of (@numbers) is $sum";
}

running_sum( 5, 6 ); # "The sum of (5 6) is 11"
running_sum( 1..3 ); # "The sum of (5 6 1 2 3) is 17"
running_sum( 4 );    # "The sum of (5 6 1 2 3 4) is 21"

This could be implemented with lexical scoping like this:

use strict;

{
    my $sum = 0;
    my @numbers;

    sub running_sum {
        foreach my $number ( @_ ) {
            push @numbers, $number;
            $sum += $number;
        }

        print "The sum of (@numbers) is $sum\n";
    }
}

running_sum( 5, 6 ); # "The sum of (5 6) is 11"
running_sum( 1..3 ); # "The sum of (5 6 1 2 3) is 17"
running_sum( 4 );    # "The sum of (5 6 1 2 3 4) is 21"

Is this uglier? Yes, absolutely it is. That extra indent level is not pretty at all. So what’s the advantage?

With lexical variables, you learn a general feature that’s applicable to a wide range of situations. It is something you ought to know in order to program in anything like Modern Perl. State variables, on the other hand, are applicable to a specific case. While I’m sure someone who encounters them regularly would be able to intuitively reason about them, it’s not obvious to the rest of us without sitting down and working it out for a while.

(And if you’re in the position of being able to intuitively reason about complex functions involving state variables, I wonder about the Bus Factor of your code.)

Booting to the CD on a Locked-Down UEFI

So I got a new laptop (Asus K55N) with one of these newfangled UEFI BIOS-replacement thingys. It took me a few hours to figure out how to boot off the CD. When it’s all locked down, you can’t hit ‘Del’ or ‘F2’ or anything to get into the BIOS config like the old days. Instead, with Windows 8:

  1. Under the shutdown menu, hold down ‘Shift’ while clicking ‘Restart’
  2. This brings up an options screen. Pick ‘Advanced Options’
  3. Click ‘UEFI Options’. The machine will reboot into the BIOS screen.
  4. Find the options to disable Fast Boot and Secure Boot, and to use “CSM” and possibly “CSM PXE”
  5. You might need to reboot again, but you should then be able to get into the BIOS screen the normal way before POST and add the CD drive to the boot list

I understand that BIOS was decrepit technology from before the 80286, but was this necessary?

In other rants, why did they (yet again) have to move all the config settings around in Windows 8? I’ve never wanted to remove an OS as fast as this one, and never been prevented from doing so in such a hostile way. Microsoft is fixing stuff that wasn’t broken.

Running VLC Automatically on a Headless Raspberry Pi

I’ve been setting up a living room stereo, with playback handled by a Raspberry Pi. The server runs headless and uses VLC’s HTTP interface. Here’s what I did:

0) Install vlc with apt-get install vlc

1) Pick a port and allow it with iptables:

iptables -A INPUT -p tcp --dport 43822 -j ACCEPT

2) Go into your network’s router and give your Raspberry Pi’s MAC address a static DHCP assignment

3) Edit /etc/vlc/lua/http/.hosts to allow connections from the local network (this will depend on your network’s address settings)

4) Generate an M3U playlist and save it to /etc/vlc/playlist.m3u

5) Test vlc by running:

cvlc -I http --http-port 43822 /etc/vlc/playlist.m3u

Using cvlc here runs it purely on the command line. You should now be able to open a web browser on another machine, go to http://<raspberry pi IP>:43822, and control it from there.

6) VLC refuses to run as root, so create a vlc user with no homedir, password, or shell:

sudo adduser --no-create-home --shell /bin/false --disabled-password vlc

7) Create a script at /etc/vlc/


#!/bin/sh
VLC_PORT=43822
sudo -u vlc cvlc -I http --http-port ${VLC_PORT} /etc/vlc/playlist.m3u > /dev/null

Make sure to make it executable with chmod +x.

(I choose to redirect its output to /dev/null because the SD card that the Raspberry Pi runs off of isn’t going to last long with a lot of log writes.)

8) Add the script to /etc/rc.local so it runs at boot:

/etc/vlc/ &

9) Restart the Raspberry Pi. When it’s finished, you should be able to browse to the same location above and get the controls again.

You can then install an app on your phone like “Remote for VLC” to control it from there, or just use the raw HTML interface in a browser.

Why Gopher is Awful

With Overbite recently making the rounds on Reddit /r/programming and Hacker News, I thought it was time to chime in with some thoughts on Gopher, and why it lost to HTTP for good reason. Despite claims to the contrary, the only real reason it’s being floated in some circles is nostalgia.

If you go looking through my CPAN directory, you will notice Gopher::Server and Apache::GopherHandler. The first was a server implementation of the Gopher protocol, and the second glued that into Apache2.

I don’t consider this to be a complete waste of time. I learned how to use Apache2’s protocol handlers (yes, Apache2 is decoupled enough that it can implement other protocols inside mod_perl). Many years ago, I used it as sample code for a job interview and I was praised for its quality.

(Sidenote: as a minor point of criticism, I was also told by the interviewer to never put “fix later” in a comment. You can put “fix after this other project is done” or “fix by 10/23/20xx”. If you put “later”, it’ll never get done. I didn’t take that job, but I’ve tried to follow that since.)

Gopher has some interesting ideas. Its structure forces a menu hierarchy between servers, and allows clients to present that hierarchy in any way they see fit. This could be a simple text-based menu, but it could be some kind of node diagram where the user navigates entirely by touching entities.

Both HTTP and Gopher have design flaws. If we roll back to HTTP/0.9, we see:

  • Errors are returned as documents rather than numeric codes
  • No length header or end-of-transmission character; the server just closes the connection when it’s done
  • No indication of the type of document being sent
  • Connections are transient, being closed at the end of each request, which makes poor use of the TCP sliding window
  • No provision for checking the status of a document to see if it changed since the last time it was cached
  • Server doesn’t send any header when the request is initiated (due to the TCP three-way handshake, the server can send some initial data for free; you see this in SMTP’s server connection header, for instance)

Of these, only the last one is still an issue in HTTP/1.1, and it’s a relatively minor point: you’d maybe want the server version and the Server header in there (again, like what SMTP servers do), but it’s not that important. Response codes were added for both success and failure. “Content-Length” and “Content-Type” headers were added. “Keep-Alive” was added to keep the connection open across multiple requests (further improved by Google’s SPDY).

EDIT 2013/12/14: After thinking about it for a while, the lack of an initial server header is more important than I thought. It’s not so much optimizing for TCP use, but rather for authentication. By sending a bit of randomly-selected data in that initial connect, the client can use that data in an encrypted password scheme to protect against certain cryptographic attacks, such as replay attacks.

Now let’s look at Gopher’s problems:

  • Server doesn’t send any header when the request is initiated
  • Types are specified in the menu, but only as a single ASCII character, which limits the number of possible types
  • Menu entries and text files end with “.<CR><LF>” to indicate that it’s done (similar to SMTP), but binary files are ended by closing the connection. There isn’t even a checksum header to verify that nothing got screwed up.
  • There’s a menu type identifier ‘g’ for gif files, and ‘I’ for all other image types (note that this is before the gif patents became a big mess)
  • No error codes
  • Closes the connection at the end of each request rather than holding it open for TCP sliding window
  • No provision for checking the status of a document to see if it changed since the last time it was cached
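
To see how cramped that one-character type field is, here’s a sketch in Python of parsing a single Gopher menu entry (the display text, selector, and host are made up for illustration):

```python
# A Gopher menu entry is one line: a single type character, then
# tab-separated display text, selector, host, and port. One ASCII
# character for the type is why the namespace ran out ('g' for GIFs,
# 'I' for every other image format).
def parse_menu_line(line: str):
    type_char = line[0]
    display, selector, host, port = line[1:].split("\t")
    return type_char, display, selector, host, int(port)

entry = "IVacation photo\t/photos/beach.jpg\tgopher.example.org\t70"
print(parse_menu_line(entry))
# ('I', 'Vacation photo', '/photos/beach.jpg', 'gopher.example.org', 70)
```

Contrast that with a MIME type like image/png, which is open-ended by design.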

Gopher+ adds the possibility of MIME types (like HTTP’s Content-Type header) and a few error codes (still nowhere near HTTP/1.1’s rich set of codes, but at least it’s something). Using the “$” command in selectors gives a view with ballpark estimates of document length, but it isn’t meant to be an exact measure for transfer, just a nice thing to display to users. [EDIT 2013/12/14: There is a length field specified in section 2.3 of the Gopher+ protocol for data transfer.] There are still no checksums, it’s still inefficient over TCP, and it has no provisions to help caching.

Giving Gopher the benefit of Gopher+ extensions is being generous. The extensions were specified in July 1993. Mosaic 1.0 was released in November of that year, and quickly became all the rage. Mosaic could function as a Gopher client, but it also was the first HTTP/HTML browser that worked. Just as people were starting to implement Gopher+, everyone decided to move to HTTP. Gopher+ has been on the back burner ever since.

Whereas the fixes to HTTP that came in versions 1.0 and 1.1 are now widespread, the Gopher+ fixes never went anywhere, not even (as far as I can tell) within the Gopher Revival team. Even if they were implemented, Gopher+ would still be badly flawed for the reasons above.

The Gopher Revival people make a big deal about how Gopher is “resource lite”. This is only true because it’s intentionally hobbled. HTTP gives you the choice to have a complex web site. A valid, minimal HTTP/1.1 header is only a few dozen bytes more than a Gopher selector. We have huge server farms for HTTP because we choose to have complex web applications. If we wanted to serve mostly-static content over HTTP, we could run it on extremely minimal hardware, too. (I can’t find the link at the moment, but an HTTP server running on an old Amiga once survived the Slashdot Effect just fine.) For that matter, the lack of caching provisions and inefficient TCP usage actually increase its bandwidth usage compared to running modern HTTP for equivalent content.
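
That “few dozen bytes” claim is easy to ballpark. Here’s a quick Python comparison of a Gopher selector request against a minimal HTTP/1.1 request for the same made-up resource:

```python
# A Gopher request is just the selector plus CRLF. A minimal HTTP/1.1
# request adds the request line and a Host header. The path and host
# here are placeholders.
gopher_request = b"/photos/beach.jpg\r\n"
http_request = (
    b"GET /photos/beach.jpg HTTP/1.1\r\n"
    b"Host: example.org\r\n"
    b"\r\n"
)
print(len(gopher_request), len(http_request),
      len(http_request) - len(gopher_request))
# prints: 19 53 34
```

Thirty-four bytes of overhead per request, in exchange for content types, response codes, caching, and persistent connections.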

The combination of HTTP and HTML won for a reason. Gopher is awful and way behind what HTTP now gives us. I see no reason to bother fixing it.