"In this paper, we study the existence of multicollisions in iterated hash functions. We show that finding multicollisions, i.e. r-tuples of messages that all hash to the same value, is not much harder than finding ordinary collisions, i.e. pairs of messages, even for extremely large values of r. More precisely, the ratio of the complexities of the attacks is approximately equal to the logarithm of r. Then, using large multicollisions as a tool, we solve a long standing open problem and prove that concatenating the results of several iterated hash functions in order to build a larger one does not yield a secure construction. We also discuss the potential impact of our attack on several published schemes. Quite surprisingly, for subtle reasons, the schemes we study happen to be immune to our attack."
In that paper they're concatenating two hash outputs, which obviously (right?) only makes it easier to reverse the hash: the attacker need only pick whichever hash is faster to attack and find a preimage for that one. Given that password entropy is much lower than hash-output entropy, the preimage found is almost certainly the original password rather than a mere collision, which makes solving the other hash's preimage moot.
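To make that argument concrete, here's a sketch of the low-entropy case (the password, wordlist, and the MD5/SHA-256 pairing are all my own illustrative assumptions):

```python
import hashlib

def h1(pw: bytes) -> bytes:
    return hashlib.md5(pw).digest()      # the "cheaper" hash, 16 bytes

def h2(pw: bytes) -> bytes:
    return hashlib.sha256(pw).digest()   # the stronger hash, 32 bytes

# Server stores the concatenation h1(pw) || h2(pw)
secret = b"hunter2"
stored = h1(secret) + h2(secret)

# Attacker brute-forces only the cheaper half over a small password space
wordlist = [b"password", b"letmein", b"hunter2", b"qwerty"]
target_h1 = stored[:16]
recovered = None
for guess in wordlist:
    if h1(guess) == target_h1:
        recovered = guess
        break

# Because password entropy is tiny, the match is almost surely the real
# password, so the second hash matches "for free" with no extra work
assert recovered is not None
assert h2(recovered) == stored[16:]
```

The concatenation never forces the attacker to engage with the stronger hash at all.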
If you want to force the attack to go through both hashes, you'd compose the functions sequentially (e.g. F(G(input)) rather than F(input)||G(input), where || denotes concatenation), which the paper at least initially doesn't talk about. I didn't read the whole thing, mind you...
Note that password stretchers like bcrypt/PBKDF2 are only fairly small extensions of this idea, so the construction clearly isn't known to be flawed in general.
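The sequential shape is easy to see in a naive stretcher. This is a simplified sketch of the idea, not PBKDF2 itself (the real construction adds HMAC and a per-block structure); the stdlib call at the end is the genuine article for comparison:

```python
import hashlib

def stretch(password: bytes, salt: bytes, rounds: int) -> bytes:
    # Sequential composition H(H(...H(salt || pw)...)): the F(G(x)) shape,
    # not a concatenation of independent hashes
    state = salt + password
    for _ in range(rounds):
        state = hashlib.sha256(state).digest()
    return state

derived = stretch(b"pw", b"salt", 10_000)

# Real PBKDF2 from the standard library, for comparison
real = hashlib.pbkdf2_hmac("sha256", b"pw", b"salt", 10_000)
```

Each round's output is the next round's entire input, so an attacker can't shortcut around any single layer.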
Shouldn't the reduction in input space for the subsequent hash functions actually make it easier to find collisions?
Or is finding collisions not closely related to the size of the input space?
The paper is a bit above my level of understanding and I tried making sense of how the cryptanalysis is done to no avail.
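For what it's worth, a toy experiment (my own sketch, using truncated SHA-256 as a stand-in hash) suggests collision-finding effort is governed by the output size, not the input space, as long as inputs outnumber outputs:

```python
import hashlib
import itertools

def toy_hash(msg: bytes, out_bits: int) -> int:
    # Keep only the top out_bits of SHA-256: output size is the knob
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") >> (256 - out_bits)

def collision_effort(out_bits: int) -> int:
    # Count evaluations until the first repeat; the birthday bound
    # predicts roughly 2^(out_bits / 2) regardless of input-space size
    seen = set()
    for i in itertools.count(1):
        h = toy_hash(i.to_bytes(8, "big"), out_bits)
        if h in seen:
            return i
        seen.add(h)

for n in (8, 12, 16):
    print(n, "bits ->", collision_effort(n), "evaluations")
```

Here the input space (8-byte messages) is identical in every run; only the output width changes, and the effort scales with it.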
https://link.springer.com/content/pdf/10.1007%2F978-3-540-28...