Default "secrets"
One of the simplest principles of cryptography is that the secret keys which are used for encryption must be kept, well, secret. Exposing the key to anyone other than the intended recipient of the message can pretty obviously lead to a compromise of the encrypted data. So, for example, hardcoding a secret key into a firmware image is unlikely to lead to secure communications using that key. Unfortunately, networking device makers—and the creators of free software firmware replacements for those devices—seem to have missed, or ignored, this basic principle.
The problem stems from the SSL keys that are installed into the firmware images for the devices. In many cases, those keys—including the supposedly private key—are generated when the image is built and then flashed into hundreds or thousands of different devices. If one can get access to the private SSL key, traffic encrypted with it (which might include HTTPS or VPN traffic) can be trivially decrypted. As the announcement of the LittleBlackBox project describes, it is, unfortunately, rather easy to obtain said keys; in fact the project provides a database of thousands of private keys indexed by their public key.
In practical terms, that means an attacker can access a vulnerable SSL-protected web site, retrieve the public key certificate, look up the corresponding private key, and decrypt any traffic that is sent or received by the web site. An attacker could also do a man-in-the-middle attack by pretending to be the site in question, as there would be no way to determine that the spoofer wasn't the real site. In order to do either of those things, though, the attacker must get access to the encrypted data stream.
Open, or weakly secured, wireless networks are the easiest way for an attacker to get that access—or to become a man in the middle. As the concerns over Firesheep have shown, there is still a lot of traffic that travels unencrypted over wireless networks. Ironically, HTTPS is touted as a solution to that problem, but that only works if the private keys are kept secret. For the web applications targeted by Firesheep, that is not likely to be a problem, as their private keys were presumably generated individually and kept safe. But for others who might be using wireless networks to configure their routers—or connect to another network via VPN—it could be a much bigger problem.
While reconfiguring your router from the local coffee shop may be a pretty rare event, even having the HTTPS-enabled web server available over the internet gives an attacker the ability to retrieve the public key, which can then be looked up in the LittleBlackBox database. If that SSL key is used for other things like VPN—something that might well be used at an open WiFi hotspot—that traffic is at risk as well. The right solution seems clear: don't supply default "secrets". In some ways, this problem parallels the longstanding, but hopefully improving, situation with default administrative passwords.
Device manufacturers and firmware projects should not be shipping SSL keys at all; instead they should either generate them at "first boot" or provide a way for users to generate and upload their own keys. There are a few different reasons that it isn't always done that way today, from concerns over devices having enough entropy to generate a random key to the amount of time it can take to generate a key on a slow CPU, but those reasons don't really offset the damage that could be done. Users who enable HTTPS access to their devices do so with the idea that it will be more secure, and can be used in places where unencrypted communication doesn't make sense.
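As a sketch of what "first boot" generation could look like, a firmware init script might create a per-device key and self-signed certificate only when none exists yet. The paths, the common name, and the use of the openssl command-line tool are assumptions for illustration, not anything a particular vendor ships:

```shell
# Hypothetical first-boot hook: create a per-device key and
# self-signed certificate if none exists yet.  The directory is
# made up for illustration; real firmware would use flash storage.
KEYDIR=/tmp/device-ssl

if [ ! -f "$KEYDIR/device.key" ]; then
    mkdir -p "$KEYDIR"
    # Ten-year self-signed certificate; the CN is a placeholder
    openssl req -x509 -newkey rsa:2048 -nodes \
        -keyout "$KEYDIR/device.key" \
        -out "$KEYDIR/device.crt" \
        -days 3650 -subj "/CN=router.local" 2>/dev/null
    chmod 600 "$KEYDIR/device.key"   # keep the private key private
fi
```

The point is simply that the private key never leaves the device, so no two units can ever share a secret.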
There are also hurdles to overcome in either creating a key for each device (and/or firmware image) or providing instructions for users but, once again, that really doesn't help users who are relying on the device for secure communications. While some in the DD-WRT community don't see it as a big problem, it is likely more serious than they credit. It would make far more sense to disable HTTPS access entirely—perhaps requiring a manual process to generate keys and enable that access—than to provide well-known keys.
While the problem highlighted by LittleBlackBox isn't earth-shattering, it does show the sometimes cavalier attitude towards security that is shown by some in the embedded device arena. When you are selling (or providing) a device or firmware that is meant to secure someone's network, it makes sense to proceed carefully. And to keep secrets, secret.
Index entries for this article
Security: Embedded systems
Security: Encryption/Key management
Posted Jan 6, 2011 5:31 UTC (Thu)
by adamgundy (subscriber, #5418)
[Link] (23 responses)
the reason all the embedded devices are shipped with hardcoded keys is that the vendors have paid for a signed cert...
Posted Jan 6, 2011 10:54 UTC (Thu)
by Fowl (subscriber, #65667)
[Link] (2 responses)
Having thought about how else it could "work" for a while, the only two ways I can think of are:
The vendor purchases an FQDN, gets a CA-signed cert for that domain, and puts that cert in the firmware image, then either:
* pointing it to an RFC 1918 address (internal, eg. 192.168.1.1), or
* configuring the device to engage in some sort of DNS spoofing.
All of which.. seems bad.
Am I close?
Posted Jan 6, 2011 17:35 UTC (Thu)
by adamgundy (subscriber, #5418)
[Link]

if I remember the Slashdot discussion correctly, I think they're shipping with signed keys for the default IP address, so eg https://192.168.0.1/ doesn't complain.
Posted Jan 7, 2011 11:48 UTC (Fri)
by james (subscriber, #1325)
[Link]
I imagine few users actually bother setting up static IP addresses, and many of those that do still use the router for DNS resolving (you don't know when your ISP is going to change their setup).
Posted Jan 6, 2011 13:35 UTC (Thu)
by alex (subscriber, #1355)
[Link] (8 responses)
We ship a USB stick with these systems for re-install; however, I guess we'll have to make a custom key for every customer with their own unique signed certificates on them. I bet they won't keep the key secure either.
I wish there were a way to do it the SSH way, i.e. you've seen this machine once before so you can be sure it's the same machine.
Posted Jan 6, 2011 17:40 UTC (Thu)
by adamgundy (subscriber, #5418)
[Link]

same problem here. we support SSL, but customers generally don't have the knowledge to self-sign, and don't want the cost (or more likely haven't got the skillz) to buy a signed cert from someone (also, it's more tricky to get a signed cert for a LAN-only machine). end result is typically it doesn't get used.
Posted Jan 7, 2011 8:26 UTC (Fri)
by madhatter (subscriber, #4665)
[Link] (6 responses)
I agree that browsers handle this badly, but the better ones do handle it.
Posted Jan 7, 2011 13:57 UTC (Fri)
by ballombe (subscriber, #9523)
[Link] (5 responses)

They only store the certificate, which is much less secure.
Posted Jan 7, 2011 14:38 UTC (Fri)
by madhatter (subscriber, #4665)
[Link] (4 responses)
Similarly, once you tell the browser to cache a certificate, the certificate has the FQDN for which it's valid embedded inside itself (as the CN). That certificate, cached in a trusted cache though it be, can't be used to authenticate another site, even one using the same keypair (which shouldn't happen).
The two situations seem remarkably similar to me.
Posted Jan 7, 2011 15:04 UTC (Fri)
by rfunk (subscriber, #4054)
[Link] (3 responses)
Posted Jan 7, 2011 15:31 UTC (Fri)
by madhatter (subscriber, #4665)
[Link] (2 responses)
[madhatta@risby madhatta]$ ping foo -c 1
PING foo (192.168.3.202) 56(84) bytes of data.
64 bytes from foo (192.168.3.202): icmp_req=1 ttl=64 time=0.290 ms
[...]
[madhatta@risby madhatta]$ ssh foo
madhatta@foo's password:
Last login: Fri Jan 7 15:16:47 2011 from risby.home.teaparty.net
[madhatta@anni ~]$
log out, reIP foo to 192.168.3.203, update risby's /etc/hosts, and try again:
[madhatta@risby madhatta]$ ping foo -c 1
PING foo (192.168.3.203) 56(84) bytes of data.
64 bytes from foo (192.168.3.203): icmp_req=1 ttl=64 time=1.50 ms
[...]
[madhatta@risby madhatta]$ ssh foo
Warning: Permanently added the RSA host key for IP address '192.168.3.203' to the list of known hosts.
madhatta@foo's password:
Last login: Fri Jan 7 15:16:58 2011 from risby.home.teaparty.net
[madhatta@anni ~]$
I see no alert. I do see a warning that a key has been cached against a new IP address, but when I repeated this test (with that key then cached against the name and both IP addresses) I saw no message whatsoever.
I accept that keys are stored against ip addresses as well as against names, but I don't accept a general assertion that when "the address changes but the name and key remain the same, I get an alert about it". When the address is novel for that name, yes; other times, no.
Caching an SSL certificate in a browser creates an entity that links a public key and a domain name. SSH goes further than this, I accept, but it doesn't go all the way.
Remember that the original comment that started me off was
> I wish there were a way to do it the SSH way, i.e. you've seen this
> machine once before so you can be sure it's the same machine.
I am not yet convinced that "permanently store this certificate" is not such a mechanism.
Posted Jan 7, 2011 17:21 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (1 responses)
> ...
> I see no alert. I do see a warning that a key has been cached against a new IP address,
You don't mean cached. The list of known hosts is not a cache. A cache is a local copy you keep to accelerate future lookups; the list of known hosts has an entirely different purpose.
It's interesting to see the detail that you can switch back to a previously seen IP address and SSH won't issue a scary message, but I'm not sure that affects any of this discussion, because the scary message on the original change is enough to trigger all the concerns.
SSH is wrong to do this, by the way. The whole point of SSL is that you don't trust the IP network routing, so you authenticate an identity that is independent of that. And the whole point of DNS is that you can move a server to another IP address (as you often must to change its physical location) and users don't see a change in identity.
And even if SSH is concerned the public key encryption could be broken and wants to offer the additional security of telling you the name resolution changed, it shouldn't associate the IP address with the key, but rather with the FQDN, resulting in the message, "Warning: adding IP address 192.168.3.203 to the list of IP locations for foo".
Posted Jan 7, 2011 19:11 UTC (Fri)
by madhatter (subscriber, #4665)
[Link]
In fairness to ssh, as I demonstrated above, it is doing exactly what you asked it to: putting up a message when it associates a new IP address with a known host name. But I agree the message could be more helpful.
I think this thread is rather separating those like ballombe, who do want to know when the IP of a server offering a service they use changes, from those who don't, like yourself.
I've found this thread most stimulating, and I now find myself having to sit down and think harder about what I want from an authentication service in a world where DNS is not trustworthy.
Posted Jan 6, 2011 23:13 UTC (Thu)
by iabervon (subscriber, #722)
[Link] (10 responses)
The sensible thing for browsers to do with SSL connections to private IP addresses is to (a) insist that they be self-signed certificates, because no CA in their list signing them could possibly be trustworthy; (b) tell the user to refer to the documentation for the device to find out how to verify the certificate; (c) ignore the subject of the certificate, since it's got to be meaningless, and use the fingerprint instead to find it again; (d) store a user-chosen name which will be displayed differently from a PKI-certified name.
Of course, it's a bit unclear how the device should communicate the correct fingerprint to the user. Probably the right way would be to boot the device at the factory, get its fingerprint, and print it on a label.
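That factory step could be as simple as asking openssl for the certificate's fingerprint and sending the output to the label printer. A rough sketch, with invented file names (the throwaway certificate is generated here only so the example is self-contained):

```shell
# Generate a throwaway device certificate, standing in for the one
# the device would create at the factory (file names are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/label-demo.key -out /tmp/label-demo.crt \
    -days 3650 -subj "/CN=device" 2>/dev/null

# Print the fingerprint that would go on the label,
# e.g. "SHA1 Fingerprint=AB:CD:..."
openssl x509 -in /tmp/label-demo.crt -noout -fingerprint -sha1
```

A user could then compare that printed string against what the browser shows when asked to verify the certificate.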
Posted Jan 6, 2011 23:27 UTC (Thu)
by adamgundy (subscriber, #5418)
[Link] (9 responses)
how has that improved security?
this is the entire problem. there's a good (in some sense of the word) reason for having these hard-coded, signed keys. the problem is that now it's busted, and there's no clear solution.
there are many people out there that think the whole 'self signed cert' scary warnings are useless, and should be ditched entirely - maybe just don't change the URL bar color if the cert doesn't match - on the grounds that some encryption (without authentication) is a whole lot better than no encryption. that doesn't play well with commercial sites, though, who are paranoid someone's spoofed their DNS and want the browser to throw a scary warning.
Posted Jan 7, 2011 0:15 UTC (Fri)
by iabervon (subscriber, #722)
[Link] (8 responses)
Personally, I like the method that Chromium uses: if a site is using https in a way that the browser doesn't trust, it crosses out the "https" in the URL in red and acts like it's a normal unsecured connection. It's hard for commercial sites to complain about this, since they don't want the browser to give big scary warnings for their http URLs, which are obviously not protected. But the browser should similarly cross out the "https" in the case where it's a certificate signed by a CA for something that the browser knows the CA didn't verify.
Posted Jan 7, 2011 0:17 UTC (Fri)
by dlang (guest, #313)
[Link] (3 responses)
Posted Jan 7, 2011 3:50 UTC (Fri)
by foom (subscriber, #14868)
[Link] (2 responses)
A cert for the IP address "192.168.0.1" though, is *NOT* fine, there's no way a CA could possibly verify that you own that address (since, well, you don't).
Posted Jan 7, 2011 6:21 UTC (Fri)
by dlang (guest, #313)
[Link] (1 responses)
Posted Jan 7, 2011 6:28 UTC (Fri)
by foom (subscriber, #14868)
[Link]
Posted Jan 7, 2011 0:28 UTC (Fri)
by adamgundy (subscriber, #5418)
[Link]
as far as whether a CA verified the IP address.. I don't think it's conclusive that they *didn't* verify it. most of these certs are on (consumer) routers, which have a default IP address. it's not beyond the realm of possibility that a CA verified that the IP address they're signing is the one the router uses. that's just as valid as the 'certification' they do for a domain name by sending out an email to postmaster@... and hoping 'postmaster' doesn't just click the link because 'it looked official' (and yes, I've seen that happen).
this is one of two recent problems that really have no good solution (read: the solutions are very expensive). firesheep being the other one, making session surfing ridiculously easy.
the only real, cost effective solution to these problems is an SSH style 'seen it' key repo in the browsers. the first time you visit a site with a self signed cert (which is otherwise valid), you get a very *non scary* warning that this is the first time you've visited the site. after that, no warnings whatsoever unless the cert changes. the problem with this solution is: IE6, IE7, IE8, Firefox < 4, Chrome < 6, etc, etc will still be throwing fits about 'invalid certs'.
Posted Jan 7, 2011 1:11 UTC (Fri)
by djao (guest, #4263)
[Link] (2 responses)
Your complaint, while valid, misses the biggest issue. It's a bit like ticketing a drunk driver for a seatbelt violation.
The biggest problem is that browsers are totally and utterly dependent on certificates for authentication. The widespread incorrect belief in the need for certificates represents the single biggest factor in perpetuating exactly the sort of insecure situations that this very article is about.
Do you trust SSH? As others here have pointed out, SSH (in its default configuration) uses no certificates. The program simply caches the key the first time it is used, and warns the user if the key ever changes. The SSH authentication model is nowadays called TOFU or "trust on first use." For someone setting up a wireless router, trust-on-first-use is perfectly fine. A user, even an unskilled one, is generally aware of the fact that they are setting up a router for the first time, and that they might have to click on boxes to accept a key.
There are many other wireless hardware devices with security implications (such as bluetooth keyboards) that already use TOFU authentication with great success. All the posters here who are complaining that it can't be done, that it would generate hundreds of support calls, are simply ignoring the fact that it not only can be done, but already is being done with no problems.
The fault in this case lies squarely with the browser manufacturers, for not supporting TOFU, and more generally for providing no authentication mechanisms whatsoever other than certificates. (Yes, a skilled user can achieve the equivalent of TOFU in Firefox. It takes five mouse clicks worth of scary dialog boxes. This doesn't count as support.) Secondary blame belongs to the companies that generate certificates, for lobbying browsers to require certificates in order to preserve their lucrative protection racket.
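A TOFU check is little more than a lookup in a file of pinned keys, in the style of SSH's known_hosts. A minimal sketch, in which the store path, host name, and fingerprint are all invented for illustration:

```shell
# Minimal trust-on-first-use logic: pin a server's key fingerprint
# the first time it is seen, and only warn loudly if it later changes.
STORE=/tmp/tofu-store            # pinned "host fingerprint" pairs
HOST=router.local                # hypothetical device name
FP="d2:5c:1a:00"                 # fingerprint the server just presented

touch "$STORE"
KNOWN=$(grep "^$HOST " "$STORE" | cut -d' ' -f2)

if [ -z "$KNOWN" ]; then
    # First use: remember the key; no scary dialog is needed
    echo "$HOST $FP" >> "$STORE"
    echo "first use: key pinned"
elif [ "$KNOWN" = "$FP" ]; then
    echo "key matches: same device as before"
else
    # Only now is a loud warning justified
    echo "KEY CHANGED: possible man in the middle"
fi
```

The design point is that the warning fires only on a change, which is rare and genuinely suspicious, rather than on every visit to a self-signed site.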
Posted Jan 7, 2011 17:49 UTC (Fri)
by scripter (subscriber, #2654)
[Link] (1 responses)

https://www.globalsign.com/digital_certificate/options/pu...
Posted Jan 7, 2011 19:05 UTC (Fri)
by iabervon (subscriber, #722)
[Link]

As a minimum, browsers should identify that a device is using a PKI-issued cert for a private identity, and simply tell users that this can't possibly provide any meaningful security.
Posted Jan 6, 2011 10:46 UTC (Thu)
by Fowl (subscriber, #65667)
[Link] (3 responses)
The private key is just to prove that you are the server you say you are, either by a trusted 3rd party you already have the keys for or key continuity management - store the key the first time and hope that your first connection isn't compromised! ("the ssh model")
So yes, having the same private key would in effect allow anyone to pretend to be your device, but without MITM that shouldn't be that useful. That's not to say that it's a good situation, clearly SSL (and SSH!) keys should be generated on first boot, with an opportunity to upload "real" keys.
Or am I on the wrong track entirely?
Posted Jan 6, 2011 12:38 UTC (Thu)
by erwbgy (subscriber, #4104)
[Link] (1 responses)
Perhaps I misunderstand SSL, but I thought that the certificate was only useful to ensure the identity, not to encrypt the session. I mean each session has randomised session keys not based on the private key. The public and private keys are used when exchanging the session key, so if you have access to the private key then you will be able to find out the session key and decrypt the traffic. The Wikipedia TLS page explains this well:

> In order to generate the session keys used for the secure connection, the client encrypts a random number with the server's public key and sends the result to the server. Only the server should be able to decrypt it, with its private key.
Posted Jan 8, 2011 20:46 UTC (Sat)
by kleptog (subscriber, #1183)
[Link]
http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_e...
It's a neat trick whereby the server and client can agree on a key over an insecure channel.
So this list is useful for MITM attacks but not always useful for eavesdropping. Now, if they have checked all these routers and confirmed that in fact DH is disabled by default, then we have a different problem indeed.
(Incidentally, I just tried my own router and Firefox doesn't say whether DH is enabled or not. Maybe that means no.)
For the fun of it, try surfing the web and rejecting any SSL connections that don't use DH. You'd be surprised at the number of sites that either (a) are incompetent or (b) want anyone who has the private key to be able to sniff your traffic. There are a lot of sites which will accept DH if you ask for it, but will default to not using it.
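The Diffie-Hellman property kleptog describes can be shown with toy numbers: each side combines its own private value with the other side's public value and arrives at the same secret, which never crosses the wire. (These numbers are absurdly small and only for illustration; real DH uses very large primes.)

```shell
# modexp BASE EXP MOD, computed by repeated multiplication;
# fine for toy-sized numbers like these.
modexp() {
    r=1; i=0
    while [ "$i" -lt "$2" ]; do
        r=$(( (r * $1) % $3 ))
        i=$(( i + 1 ))
    done
    echo "$r"
}

p=23; g=5        # public parameters, known to any eavesdropper
a=6;  b=15       # private values, never transmitted

A=$(modexp $g $a $p)    # client sends A=8 over the wire
B=$(modexp $g $b $p)    # server sends B=19 over the wire

Ka=$(modexp $B $a $p)   # client computes the shared secret
Kb=$(modexp $A $b $p)   # server computes the same secret
echo "client=$Ka server=$Kb"    # both sides arrive at 2
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving a discrete logarithm, which is what makes a passively recorded session useless even to someone holding the server's long-term private key.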
Posted Jan 6, 2011 15:07 UTC (Thu)
by jldugger (guest, #57576)
[Link]
Posted Jan 6, 2011 12:32 UTC (Thu)
by NAR (subscriber, #1313)
[Link]
> There are a few different reasons that it isn't always done that way today, from concerns over devices having enough entropy to generate a random key to the amount of time it can take to generate a key on a slow CPU

I may be wrong here, but most of these devices don't have either a keyboard or a monitor attached to them. In order to configure them, they need to be connected to another computer (with a keyboard and a monitor), so why not generate the keys there? I presume there's more than enough CPU power and entropy there. I configured a new WiFi router just last week: I had to connect it to the computer with a UTP cable, put the included CD into the computer, and run the configuration program (of course, on Windows), and that program generated e.g. the WPA2 key. I didn't even need to access the web-based interface.
Posted Jan 6, 2011 16:20 UTC (Thu)
by rfunk (subscriber, #4054)
[Link] (3 responses)
Posted Jan 7, 2011 9:54 UTC (Fri)
by dsommers (subscriber, #55274)
[Link] (1 responses)
The argument used was that "these IP addresses are not valid any more and we will remove these iptables rules in the next release". That was without any ETA for the next release, and nobody saw any need to inform users about it, despite the fact that a couple of simple 'nvram' commands were all that would be needed as a workaround.

So it does not surprise me at all that the DD-WRT community does not see LittleBlackBox as a problem for their firmware. For me this is yet another reason to stay away from DD-WRT.

I switched to X-WRT and later on to OpenWRT, and I find these two to be much more open and secure router distributions. It is also quite easy to build the OpenWRT firmware yourself.
Posted Jan 17, 2011 11:42 UTC (Mon)
by eduperez (guest, #11232)
[Link]
Posted Feb 3, 2011 14:45 UTC (Thu)
by ddwrt (guest, #72712)
[Link]
The claims written here that "the DD-WRT people" do not care are not right.

We noticed this article (and have even now subscribed to LWN) and we'll work on a solution.

Our main hassle with a solution right now is that on most platforms we do not have enough space to put OpenSSL (for the key and X.509 stuff) into the firmware.

Also we assume that offering people a service somewhere "out in the web" to generate the keys would lead to trust problems again.

The idea we have right now is to use JavaScript in the browser to generate the RSA key (locally) and the X.509 certificate.

Secondly, we don't trust the quality of the randomness available right now on embedded systems. (OK, that is for sure better than having these "secret defaults".)

We have found code to do the RSA part already, but haven't finished off the X.509 part.
Shipping SSL enabled devices
Default "secrets" and Trust On First Use
Wrong, see Diffie-Hellman
Default "secrets" on DD-WRT etc