Dealing with weakness in SHA-1
The SHA-1 hash function serves an auxiliary role in a number of cryptography utilities, notably OpenPGP, where it is used to sign documents and generate key fingerprints. Researchers recently published an attack on SHA-1 that can find collisions in drastically less time than previously thought, accelerating the move to replace SHA-1. A contest is underway to select a replacement, to be designated SHA-3, but it will not be standardized until 2012. Between now and then, there are several steps interested individuals can take to harden themselves against attack — starting with understanding just what a hash collision can and cannot compromise.
SHA-1 was created by the National Security Agency (NSA) in 1995. It computes a 160-bit hash or "digest" of any message less than 2^64 bits long. Like any cryptographic hash function, its value as a message authentication tool depends on it being mathematically hard to find a collision: two messages that generate the same hash value. A brute-force search would, on average, take 2^80 evaluations of the function to find a collision (80 being half of the 160-bit digest length). Such a search would find two arbitrary messages that result in the same hash; it would not allow an attacker to find a collision with any specific message. Nevertheless, the 2^80 steps of a brute-force attack serve as a metric for the comparative efficiency of other attacks.
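To make the birthday bound concrete, here is a small illustrative Python sketch (the `truncated_sha1` helper and message format are invented for the demonstration) that finds a collision on a digest truncated to a few bits. The same search against the full 160-bit digest would take on the order of 2^80 evaluations, which is why brute force is out of reach:

```python
import hashlib
import itertools

def truncated_sha1(data: bytes, bits: int = 24) -> int:
    """Return the first `bits` bits of the SHA-1 digest as an integer."""
    digest = hashlib.sha1(data).digest()
    return int.from_bytes(digest, "big") >> (160 - bits)

def birthday_collision(bits: int = 24):
    """Find two distinct messages whose truncated digests collide.

    Expected work is roughly 2**(bits / 2) evaluations -- the
    "birthday bound".  For the full 160-bit digest that would be
    about 2**80 evaluations.
    """
    seen = {}
    for counter in itertools.count():
        msg = b"message-%d" % counter
        h = truncated_sha1(msg, bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg

# A 24-bit digest collides after ~2**12 tries, almost instantly.
m1, m2 = birthday_collision(bits=24)
assert m1 != m2
assert truncated_sha1(m1, 24) == truncated_sha1(m2, 24)
```

Note that this finds *some* colliding pair, chosen by the search; finding a second message matching one *given* hash (a second preimage) is a much harder problem, which is why collisions alone do not let an attacker forge a signature on an existing document.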
In recent years, the most efficient known attack on SHA-1 required 2^63 evaluations — around 1/100,000th the number of steps for a brute force search, but still safely outside the reach of a real-world attacker. That changed in April of 2009, when Cameron McDonald, Josef Pieprzyk, and Phil Hawkes presented findings [PDF] at the Eurocrypt 2009 conference that lowered the bar to 2^52 — a 2,000-fold speedup over 2^63. The existence of such an attack is far from a crisis-level weakness, but the upshot is that it is better to start migrating away from SHA-1 while it is still relatively safe.
SHA-1 in free software cryptography
SHA-1 is used in public-key cryptographic systems, including the OpenPGP specification (RFC 4880) implemented on most Linux desktop and server distributions by GnuPG. Since SHA-1 is a hash function, and not a cipher, it does not play a direct role in encryption, but it is used for digital signatures. In addition, OpenPGP key fingerprints are created with SHA-1, and key fingerprints are in turn used in key revocation and modification detection codes (MDC).
An OpenPGP digital signature involves computing a hash of the original message, then encrypting the hash with the signer's private key. To verify the signed message's integrity, the recipient must be able to compute the same hash on the received text. That requires support from both the software and the keys used — although OpenPGP supports multiple hash algorithms in addition to SHA-1, old DSA keys can only use 160-bit hashes. Historically, that meant SHA-1, although RIPEMD-160 is compatible as well. Consequently, selecting a stronger algorithm when signing messages is possible with an application like GnuPG, but in the worst case a user wishing to avoid SHA-1 would need to create a new DSA2 or RSA signing key.
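The hash-then-sign structure can be sketched with textbook-sized toy RSA numbers (this is not OpenPGP's actual packet format, and the `sign`/`verify` helpers and parameters are invented purely for illustration):

```python
import hashlib

# Toy RSA parameters -- far too small for real use, but enough
# to show the structure of hash-then-sign.
p, q = 61, 53
n = p * q          # 3233
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % 3120 == 1

def sign(message: bytes, hash_name: str = "sha512") -> int:
    """Hash the message, then apply the private-key operation to the hash."""
    h = int.from_bytes(hashlib.new(hash_name, message).digest(), "big")
    return pow(h % n, d, n)

def verify(message: bytes, signature: int, hash_name: str = "sha512") -> bool:
    """Recompute the hash and compare it with the public-key operation
    applied to the signature.  Signer and verifier must agree on
    hash_name -- which is why keys locked to 160-bit hashes are a problem."""
    h = int.from_bytes(hashlib.new(hash_name, message).digest(), "big")
    return pow(signature, e, n) == h % n

sig = sign(b"attack at dawn")
assert verify(b"attack at dawn", sig)
```

Because only the hash is signed, any two messages with the same hash share a valid signature — which is exactly why hash collisions matter for signatures.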
Key fingerprints are digests of public keys, useful for key management because they are considerably shorter than the keys from which they are hashed, which makes them far easier for humans to read and reference. Applications often list keys in a user's keyring by their fingerprint, so a SHA-1 collision that results in two keys having the same fingerprint could cause user confusion or unpredictable application behavior. Perhaps more importantly, key revocation certificates reference keys by fingerprint, again opening the door to unpredictable behavior if the application finds two keys with identical fingerprints. OpenPGP specifies SHA-1 as the only hash algorithm for version 4 keys (the latest revision), so there is no current workaround for fingerprint collisions.
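The v4 fingerprint construction itself is simple; a rough sketch of the RFC 4880 recipe follows (the key material in the example is made up, and a real implementation works on a parsed public-key packet):

```python
import hashlib

def v4_fingerprint(key_packet_body: bytes) -> str:
    """Sketch of the RFC 4880 v4 fingerprint: SHA-1 over the octet
    0x99, a two-octet big-endian packet length, and the public-key
    packet body itself."""
    prefix = b"\x99" + len(key_packet_body).to_bytes(2, "big")
    return hashlib.sha1(prefix + key_packet_body).hexdigest().upper()

# Hypothetical key material, just to show the digest format:
fp = v4_fingerprint(b"\x04example key material")
assert len(fp) == 40   # 160 bits = 40 hex digits
```

Since the hash algorithm is fixed by the key format, a stronger fingerprint hash requires a new key-format version, not just an application setting.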
MDC is an OpenPGP system to provide message integrity-checking with less overhead and less stringent requirements than full digital signatures. RFC 4880 describes it as "analogous to a checksum." MDC also specifies SHA-1 as its sole hash algorithm, but because its modest goals cover message integrity but not authentication, the existence of collisions does not adversely affect it. The checksum-like usage of the hash algorithm in this context simply verifies that the message content was not altered or corrupted in transit.
Although 2^52-evaluation collisions represent a significant weakening of SHA-1, it is important to note that hash collisions are not as easy to exploit as broken ciphers. On the GnuPG users' mailing list, maintainer David Shaw evaluated some of the possible scenarios, such as attempting to forge a signature. Even with the easier-to-exploit MD5 collision problem, thus far no one has been able to create a phony signature to match the signature of an existing key; the closest anyone has come is to generate two keys that can be used to create the same signature — an attack with little practical value. The prevailing opinion on the IETF's OpenPGP Working Group list was much the same. A more likely problem is the unexpected behavior of applications when confronted with fingerprint collisions.
Practical migration and looking forward
Nevertheless, users are encouraged to transition away from SHA-1 usage to stronger hash algorithms. The US government has mandated deprecation of SHA-1 for its use by the end of 2010. There are several alternative hash functions available today, including the family known as SHA-2. SHA-2 includes several functions that are related but use different digest lengths: SHA-224, SHA-256, SHA-384, and SHA-512. The SHA-2 functions are algorithmically similar to SHA-1, and so would be vulnerable to the same type of attacks, but because of their larger digest size they remain significantly more secure.
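The difference in digest length is easy to see with Python's hashlib (the printed "generic collision cost" assumes a plain birthday search, not the specialized attacks discussed above):

```python
import hashlib

msg = b"Dealing with weakness in SHA-1"
for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    bits = hashlib.new(name, msg).digest_size * 8
    # The generic birthday bound is 2**(bits/2) evaluations.
    print(f"{name}: {bits}-bit digest, generic collision cost ~2**{bits // 2}")
```

Even if SHA-2 were weakened by the same factor as SHA-1 (2^80 down to 2^52), SHA-256's 2^128 generic bound would leave an enormous security margin.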
The National Institute of Standards and Technology (NIST) is currently holding a competition to select a next-generation hash algorithm to be designated SHA-3. Submissions were due in October of 2008, and the final winner is expected to be announced in 2012.
Debian's Daniel Kahn Gillmor posted a step-by-step guide to migrating away from SHA-1 in GnuPG. Included are instructions for setting up signing algorithm preferences in gpg.conf, attaching digest preferences to a public key so that other users will select a stronger algorithm when sending a message, and generating a replacement for an old DSA key. It is an important read particularly for key replacement, because setting strong digest preferences must be done before generating a new key — otherwise GnuPG will default to using SHA-1.
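For reference, the relevant gpg.conf settings look something like the following (a sketch along the lines of Gillmor's guide; the exact ordering of the preference lists is a matter of taste, and option support should be checked against your GnuPG version):

```
# ~/.gnupg/gpg.conf
# Digests to prefer when signing, strongest first:
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
# Digest used when signing (certifying) other people's keys:
cert-digest-algo SHA512
# Preferences attached to newly generated keys:
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
```

Attaching updated digest preferences to an existing public key is done interactively, with `gpg --edit-key <keyid>` and the `setpref` command, followed by re-publishing the key.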
GnuPG for its part is planning to change its defaults in future releases, so that newly-created keys will default to RSA instead of DSA, and be able to use newer hash algorithms. Gillmor has also proposed a tool to scan OpenPGP keys and offer suggestions to the user for strengthening them — including using the current format, key type and size, appropriate sub-keys, and several other parameters.
The trickier problem is the OpenPGP specification's inclusion of SHA-1 as the "hardwired" choice for fingerprinting, revocation, and MDC. A thread on the OpenPGP Working Group's mailing list exposes several points of view. Some think that the group should wait for SHA-3, some think a change is due now, and others think that hash collisions even on fingerprints are not a significant enough security risk to warrant changing the specification.
As Gillmor's migration guide indicates, Debian is trying to migrate its developers, maintainers, and teams away from SHA-1 digests and DSA keys and towards RSA keys with SHA-512 digests. Likewise, the Fedora project has undertaken a concerted migration to SHA-2 hashes. Ubuntu security team administrator Kees Cook says that that distribution will update its keys over time, but that there is no rush. OpenSUSE's Marcus Meissner echoes that sentiment, observing that the distribution is phasing out SHA-1 and MD5 for signing, but that collisions do not constitute a security threat for simple download integrity checking. All four distributions already use RSA master keys to sign packages.
Shaw emphasized that the recent attacks on SHA-1 still require a significant amount of work, and at best would allow an attacker to produce two original documents that hash to the same value, which does not directly impact most people's usage of OpenPGP. "This is not an attack where someone could take an existing OpenPGP-signed document and make a new document that matches the signature or the like." He advised individuals and maintainers who know that their intended recipients can accept larger hashes to use larger hashes, particularly when signing documents created by someone else (such as at a key-signing event), but not to worry unduly about using SHA-1 when that is the only option. In other words, he said, walk, but don't run, for the exits.
Index entries for this article:
Security: SHA-1
GuestArticles: Willis, Nathan
Posted Jun 18, 2009 5:20 UTC (Thu) by leonov (guest, #6295)
Posted Jun 18, 2009 6:58 UTC (Thu) by djpig (guest, #18768)
Posted Jun 18, 2009 10:11 UTC (Thu) by cortana (subscriber, #24596)
Posted Jun 18, 2009 20:34 UTC (Thu) by dlang (guest, #313)
IIRC there is code in git (defaulted to off for performance reasons) that checks that files with the same hash are actually the same, and produces errors if they aren't.
Posted Jun 18, 2009 6:55 UTC (Thu) by djpig (guest, #18768)
Posted Jun 18, 2009 10:45 UTC (Thu) by dd9jn (✭ supporter ✭, #4459)
For sure he would not try to play with rogue signatures, but would use some simple vulnerability to get access to a developer's machine or an FTP server or whatever. This is far easier in terms of time needed and cost involved than anything else. The signatures on packages are okay, but what does such a signature actually tell us? That the package has been created by a project's developer! It does not tell us how diligently he manages his machine, where he got the upstream source, what tools he used for building and editing, how strong the door to his room is, how he manages his secret key, what other software is running on the development box, whether he uses HTML mail (like the author of the article) and thus possibly easier-to-attack mail readers, and so forth.

Precomputed collisions in a properly set-up PKI, like the one used by Debian, require the secret key of the developer. Thus such an attack is irrelevant - we can't protect ourselves against a rogue developer.

Rushing out mainly unneeded fixes is one thing; preparing for the future is better, and that is what we are doing with GnuPG. In a few years we will have a large installed base of modern GnuPG versions, and then switching to a more modern algorithm will be far easier than what Daniel is currently proposing.
Posted Jun 18, 2009 17:16 UTC (Thu) by GreyWizard (guest, #1026)
http://csrc.nist.gov/groups/ST/toolkit/key_management.html
Posted Jun 18, 2009 22:09 UTC (Thu) by jmayer (guest, #595)
Posted Jun 19, 2009 14:33 UTC (Fri) by n8willis (subscriber, #43041)
Nate
Posted Jun 30, 2009 12:00 UTC (Tue) by forthy (guest, #1525)
One way to improve the strength of a signature is to sign with salt, i.e. sign random number + document instead of the document alone (you can feed the random number into the hash accumulator as its starting point). This basically removes the possibility of creating, in advance, a pair of documents that will result in the same hash, because the signer's random number is still unknown (unless, of course, the hash has a vulnerability where a known sequence of bytes erases the history in the accumulator). This is a remedy that can be implemented right now, even with SHA-1. Several of the SHA-3 proposals recommend something in that direction, though e.g. Bruce Schneier recommends starting with your public key as salt - this is less useful, since the public key is known to the attacker. A document with several signers, though, makes things a lot more difficult for him.
Posted Jul 1, 2009 6:28 UTC (Wed) by xoddam (subscriber, #2322)
Effectively, someone can provide you with a document to sign, and instead of signing the document they give you, you add some nonce to it and sign the result instead. The nonce (salt) can be from /dev/urandom or some ASCII art or whatever. Then you and/or the originating party can forward the document you *did* sign, including the nonce, to whomever it concerns.
The reason for not salting with your own public key is not that other people *can* know your public key as such, but that the salt must be unknown to the attacker in advance; otherwise he can prepare two documents with the same hash *and* the same salt before presenting one to you to sign.
Historically, salts were used for preventing dictionary attacks on /etc/passwd on the same principle: an attacker might know all the words in the dictionary in advance, but cannot possibly pre-compute each of them with every possible salt. But if /etc/passwd is world-readable, the attacker knows a much smaller range of possible salts in advance too, hence /etc/shadow and hence your recommendation for randomness.
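The salted-hashing scheme the two comments above describe can be sketched in a few lines of Python (the helper names are invented for the example, and a real implementation would bind the nonce into the signed material in a standardized way):

```python
import hashlib
import os

def salted_digest(document: bytes, hash_name: str = "sha1"):
    """Prepend a random nonce before hashing.  Because the signer
    chooses the nonce, an attacker cannot prepare in advance a pair
    of documents that collide under the (nonce + document) hash."""
    nonce = os.urandom(16)
    digest = hashlib.new(hash_name, nonce + document).digest()
    return nonce, digest

def verify_salted(document: bytes, nonce: bytes, digest: bytes,
                  hash_name: str = "sha1") -> bool:
    """Anyone holding the nonce can recompute and check the digest."""
    return hashlib.new(hash_name, nonce + document).digest() == digest

nonce, digest = salted_digest(b"contract text")
assert verify_salted(b"contract text", nonce, digest)
assert not verify_salted(b"altered contract", nonce, digest)
```

The nonce must travel alongside the document and signature, which is the main practical cost of the scheme.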
As Gillmor's migration guide indicates, Debian is trying to migrate its developers, maintainers, and teams away from SHA-1 digests and DSA keys and towards RSA keys with SHA-512 digests.
Using Debian as the subject of any sentence is always questionable ;). Notably, the position of the Debian keyring maintainer is a little more relaxed than the article suggests.
NIST Recommendations
http://csrc.nist.gov/groups/ST/toolkit/documents/SP800-57...
Dangerously wrong?

The statement "[...] the closest anyone has come is to generate two keys that can be used to create the same signature; an attack with little practical value" seems to be dangerously wrong: this attack has been successfully exploited in a place where it could do maximum damage to everyone still using MD5:

http://www.win.tue.nl/hashclash/rogue-ca/ (MD5 considered harmful today - Creating a rogue CA certificate)

So with the recent breakthrough on SHA-1 attacks and things like OpenCL allowing highly parallel computations on $200 graphics GPUs: isn't the same attack doable with SHA-1 now?