Using encryption to hide secret messages may be as old as written language. For nearly all of that time, these schemes had a fundamental flaw: how do you tell the right people how to decrypt without letting the wrong people know as well?
Military commanders throughout history would have loved a good answer to this question. It’s been less than a century since we found a satisfying answer.
Take two big prime numbers (numbers that have exactly two integer divisors: 1 and themselves). Multiply them together. Knowing only that product, can you figure out the original two primes?
This is the prime factorization problem. Nobody knows a fast way to solve it, so you probably can’t do it in a reasonable amount of time. Oh, sure, I can give you 15 and you’ll quickly give the answer of 5*3. But what about this one:
There used to be a cash prize of $75,000 offered by RSA Laboratories for figuring that one out.
Thing is, nobody has mathematically proven that there isn’t a quick way of factoring these composites. The problem belongs to a set called “NP”: problems where, given a possible solution, you can quickly verify whether it’s correct. If I handed you two prime numbers and claimed that they multiply together to make the big composite number above, you could do the multiplication yourself and check my claim. The trouble is that there’s no known way to come up with that potential answer other than generating a bunch of guesses and trying each one; in other words, brute force.
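The asymmetry is easy to see in code. Here’s a quick Python sketch (Python only because its integers are arbitrary-precision out of the box; the primes are tiny, illustrative stand-ins): multiplying is one operation, while recovering the factors is a guess-and-check search.

```python
# Toy demonstration with small primes; real RSA moduli are 2048 bits.
p, q = 2579, 3557          # two small primes
n = p * q                  # building the composite: one multiplication

def factor(n):
    """Recover the factors by trial division: guess and check."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d   # easy to verify a claimed answer: multiply back
        d += 1
    return None                # n itself is prime

print(factor(n))           # (2579, 3557) -- feasible here, hopeless at 2048 bits
```

Trial division takes on the order of sqrt(n) steps; for a 2048-bit n that’s roughly 2^1024 candidates, which is why the brute-force route is considered infeasible.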
The set of “P” problems are the ones where you can generate the answer in polynomial time (which is about as slow as you can get before the computational complexity gets to be too much). The set of “NP” problems are the ones where you can take a potential answer and check if it’s right in polynomial time. The big question of complexity theory is if P and NP are the same set.
There are many other such problems, some of which would be useful to solve. A salesman trying to find the fastest route between a bunch of different houses is an NP problem. So is packing a knapsack with items of different weights and values so as to maximize the total value carried without going over a weight limit.
One of the interesting things about NP is that if you prove any one of its hardest problems (the “NP-complete” problems, which include the two above) can be solved in polynomial time, that approach generalizes to solve them all. At the same time, there doesn’t seem to be any fundamental reason why P and NP should be separate. But they probably are: we’ve been trying really, really hard on some of these problems for a long time, and nobody has managed to unify the two. In the last 40 years or so, there’s been the extra incentive of breaking tons of encryption algorithms, and nobody has cracked it that way, either. If math worked on the a posteriori, evidence-based system of science rather than a priori logical proofs, then P!=NP would have been declared true years ago. (I can’t do this subject justice beyond this tl;dr version, so read the link above.)
This is not such great news for the salesmen and mountaineers of the world, but it’s great news for cryptographers. If indeed P!=NP, then the RSA encryption algorithm at least has a chance of being secure (factoring isn’t known to be NP-complete, but nobody has found a fast algorithm for it, either).
Now, remember that RSA has to deal specifically with prime numbers. Since we can’t use just any number the way we can with block cipher keys, we need a much longer key length. Keys of 2048 bits are common; 4096- and 8192-bit keys are not unheard of, but there is some debate about their necessity. RSA Labs tends to argue that 2048 bits is good enough for most uses, because as the key size rises, the resources needed to break the key tend to rise faster than Moore’s Law.
RSA isn’t the only form of public key crypto. DSA, Diffie-Hellman, and Elliptic Curves are also examples. You may have heard that Quantum Computers will one day destroy RSA’s security, and then the NSA will be able to steal all our credit card numbers so they can order a huge arsenal of Nerf guns off Amazon. We’ll talk about Quantum Computers in a future post, but suffice it to say for now, we have some options for replacing RSA and other public key crypto if it comes to it.
When a key is generated, you get a number n, the product of the two original primes; it forms part of both the public and private keys. You also get a number e, which goes in the public key, and a number d, which lives only in the private key. The three are tied together by a mathematical relationship that’s established during key generation.
Let’s say Alice and Bob have each others’ public keys, and Alice would like to send a message M to Bob. Alice turns M into a straight integer m by some means. Alice then computes c = m^e mod n. Except this takes an awfully long time and smoke starts to come out of her computer.
The reason for this has nothing to do with the P=NP nonsense from above, and everything to do with the fact that nobody has a computer with a word size that can handle a 2048-bit number. You need a bigint library to handle a number like that, and bigint libraries are slow.
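Here’s the whole arithmetic as a toy round trip in Python, using the classic textbook primes 61 and 53 (no padding scheme, so no real security; just the relationships among n, e, and d). Python’s three-argument pow() does modular exponentiation by repeated squaring, the same trick a bigint library uses to keep 2048-bit exponents tractable.

```python
# Toy RSA key generation and round trip (textbook numbers, not secure).
p, q = 61, 53
n = p * q                      # 3233: published as part of both keys
phi = (p - 1) * (q - 1)        # 3120: kept secret, discarded after keygen
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: (e * d) % phi == 1  (Python 3.8+)

m = 65                         # the message M, already turned into an integer
c = pow(m, e, n)               # encrypt: c = m^e mod n
assert pow(c, d, n) == m       # decrypt: m = c^d mod n
```

The asserts pass because of how d was chosen: raising to the e-th and then the d-th power modulo n brings you back to where you started.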
After airing out her house and buying a new computer, Alice tries again, only this time she does something clever. Instead of trying to encrypt the whole message with RSA, she generates a fresh block cipher key and uses that to encrypt the message. Since the block cipher key is no more than 256 bits long, encrypting just that with RSA is no big deal. She does that, prepends the result to the block-cipher-encrypted message, and sends it all along to Bob.
(See why block ciphers are so important, despite the advent of public key crypto?)
One thing to note here is that the block cipher is now part of the security chain. If Mallory wants to break the message, he can choose either RSA or the block cipher, depending on which one he thinks will be easier. This ultimately means that Alice has to pick a good block cipher. And once again, she has to make sure those keys come from a good random number generator.
Bob gets the message and decrypts the block cipher key by computing c^d mod n. He now has the block cipher key and, knowing which block cipher Alice used, can decrypt the whole message.
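A sketch of just the key-wrapping step in Python, under stated assumptions: two well-known Mersenne primes stand in for properly generated RSA primes, there’s no padding, and the block cipher that would encrypt the actual message is omitted.

```python
import os

# Hybrid encryption sketch: RSA wraps a short symmetric key; a block
# cipher (omitted here) would encrypt the message body with that key.
p = 2**127 - 1                 # stand-in primes (known Mersenne primes);
q = 2**521 - 1                 # real keys use two random 1024-bit primes
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

session_key = int.from_bytes(os.urandom(32), "big")   # fresh 256-bit key
wrapped = pow(session_key, e, n)    # what Alice prepends to the ciphertext
unwrapped = pow(wrapped, d, n)      # what Bob recovers with his private key
assert unwrapped == session_key     # Bob can now run the block cipher
```

The only hard requirement is that the session key, as an integer, be smaller than n; a 256-bit key under a 2048-bit modulus has plenty of room.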
Bob now wants to send a message of his own to Alice, but wants to make sure Alice can trust that it’s him. After all, anybody with Alice’s public key could encrypt a message and send it to her. Bob also thought he saw Mallory sneaking around the email server yesterday, and something about that guy seems off.
Fortunately, RSA has a solution here, too. In addition to encrypting, RSA can sign a message. Bob applies the same formula used for encryption, but with his own private key where the public key would normally go. Anybody with Bob’s public key can then reverse the operation. A valid decryption means the message must have come from Bob, since only his private key could have produced it.
Bob does this. This causes his CPU to turn into smoke. After airing out his house and buying a new computer, he takes a cryptographic hash of the message, signs that, and sends it to Alice. (The cryptographic hash would probably be somewhere between 256 and 512 bits long. We’ll talk about them more in the next post.)
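The hash-then-sign trick, sketched in Python with toy parameters (well-known Mersenne primes as stand-ins for real RSA primes, and no PSS-style padding):

```python
import hashlib

# Sign a 256-bit hash of the message instead of the whole message.
p, q = 2**127 - 1, 2**521 - 1       # stand-in primes, not for real use
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # Bob's private exponent

message = b"Definitely from Bob, honest."
h = int.from_bytes(hashlib.sha256(message).digest(), "big")  # fits below n
signature = pow(h, d, n)            # "encrypt" the hash with the private key

# Anyone holding the public key (n, e) can check the signature:
assert pow(signature, e, n) == h
```

Since the hash is a fixed 256 bits regardless of message length, signing stays cheap no matter how big the message is.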
Now Alice can verify that the message did come from Bob. Or rather, she verifies that it was signed with the private key matching the public key she has on file for Bob. Does she know that this key actually came from Bob?
If she got it from Bob in person, then that’s great. If she got it over the phone, it’s probably OK, but she can’t guarantee it (can Mallory impersonate voices?). If she got it over email a long time ago and has sent many encrypted messages back and forth with Bob since then, it’s probably OK, too, but maybe not.
This is the problem of trust. There have been three different widespread solutions to this problem over the years:
- Alice knows somebody who knows somebody who knows Bob (the PGP solution)
- Alice and Bob both trust a central authority to take care of keys (the SSL solution)
- The first time they exchanged keys probably went OK (the SSH-in-practice solution)
In the early 90s, PGP came out with a web-of-trust system. It’s also used in GnuPG, which tends to be more common these days. It works like this: Alice meets Carol and exchanges key information. Carol seems pretty trustworthy, so Alice marks Carol’s key as reliable in her key store. Carol also met with Bob and exchanged keys. Since Alice trusts Carol, Alice can also trust Bob. Of course, you don’t want to go through too many steps of trust, but with the small-world effect working in our favor, this can work out well as long as a lot of people are signing each others’ keys.
Trouble is, only us nerds want to sign each others’ keys. Seriously, there have been nerd parties just for this.
The next stab at the problem was SSL. Here, there’s a central party (called a “Certificate Authority” in SSL parlance) that everybody trusts. Alice and Bob and everybody else give the CA their public keys. The CA does some checks to see that Alice and Bob and everybody else are who they say they are, and then certifies that each key indeed belongs to that person. Your browser comes with a bunch of trusted CAs built in, so we avoid the nerd-party problem.
Trouble is, CAs sometimes do dumb things and don’t check a person’s identity very well. Cert authorities were being paid a lot of money to do a level of verification that could have been done by a small shell script. When people realized this, the CAs started charging more money for “extended validation” certificates; in other words, charging extra to do the job they were supposed to be doing all along.
Those are the two most widespread solutions, but a third is done in practice by nearly everyone who uses ssh on a daily basis. When you first login to a server over ssh, you see a message like this:
```
The authenticity of host '10.0.0.2 (10.0.0.2)' can't be established.
ECDSA key fingerprint is 16:f0:74:84:1a:3d:98:be:ad:af:f5:1a:ee:59:d4:73.
Are you sure you want to continue connecting (yes/no)?
```
Have you ever called up the sysadmin and asked if that fingerprint is correct? Admit it: you just typed ‘yes’ and got on with work. Dentists are paid to scold you when you don’t floss. Cryptographers are paid to scold you when you don’t check your SSH fingerprints. On the other hand, we’ve all been doing this for years and things seem to get along OK.
I won’t argue that this is at all good practice. I secretly wonder sometimes if the web-of-trust system could be revived using social networking and smartphones, where “friending” someone meant scanning the QR code containing their public key (answer: probably not; it’d just be us nerds using our highly secure social networking system to organize D&D get-togethers). But the “trust on first connection” system seems to work out.
Wasn’t this supposed to be a Perl encryption primer? Let’s do some Perl.
The GnuPG Perl module is a wrapper around the GnuPG program, which is itself an implementation of OpenPGP as standardized in RFC 4880. The details of key generation, encryption, and signatures are handled for you. Since it all works through the GnuPG executable, you can also use the command line or graphical tools to manage your keys and web of trust.
We create a key with:

```perl
$gpg->gen_key(
    name       => "Alice's Key",
    comment    => "Alice's GnuPG key",
    email      => 'email@example.com',
    passphrase => $secret,
    size       => 2048,
    algo       => 'RSA',
);
```
The private key will be encrypted with the passphrase. Any other tools you use with this key will ask for the same passphrase before they can do anything with it. Be sure to use something typeable that you can remember.
We can now export those public and private keys out of the GnuPG keyring into files:
```perl
$gpg->export_keys(
    secret => 1,
    output => 'keys.sec',
);
$gpg->export_keys(
    output => 'keys.pub',
);
```
Without the `secret` parameter, only public keys will be exported. Later on, we can import the keys back with:
```perl
$gpg->import_keys(
    keys => [qw( keys.pub keys.sec )],
);
```
Now that we have our keys, we can encrypt a message in a file:
```perl
$gpg->encrypt(
    plaintext  => "file.txt",
    output     => "file.gpg",
    armor      => 1,
    sign       => 1,
    passphrase => $secret,
    recipient  => 'email@example.com',
);
```
We assume here that a key matching the recipient’s email address has been loaded. Despite the name, `armor` doesn’t provide any additional security; instead, it encodes the encrypted message in a variation of Base64, which is handy if you’re going to send the message over email or another system that prefers text over binary. The `sign` parameter adds a cryptographic signature to the message in addition to encryption. Since we’re signing the message with our default key in addition to encrypting, we’ll need to set `passphrase` to the same value we used when we generated the key.
You can also do the signature alone with:

```perl
$gpg->sign(
    plaintext  => "file.txt",
    output     => "file.txt.asc",
    armor      => 1,
    passphrase => $secret,
);
```
After this, file.txt.asc will be ready to send to Bob. Bob can now verify Alice’s signature with:
```perl
$gpg->verify(
    signature => "file.txt.asc",
    file      => "file.txt",
);
```
This method will croak if the signature is not valid.
When Bob gets an encrypted message from Alice, he can do:
```perl
$gpg->decrypt(
    ciphertext => "file.gpg",
    output     => "file.txt",
    passphrase => $secret,
);
```
Once again, we need to set `passphrase` to the value we used to generate the key.
There are alternatives to the GnuPG module in Perl, such as Crypt::OpenPGP, which follows a broadly similar interface. OpenSSL also implements things in its own way, which you can use with Crypt::OpenSSL::RSA. In fact, there’s a whole suite of modules under the Crypt::OpenSSL namespace.
Cryptography isn’t just about keeping secrets, though. There is also the matter of verifying the message. We’ll be covering that in the next post on Hashes.