My name is on this talk, but you should know: not only did I not go to Black Hat to help deliver the talk, but I did zero work on its preparation; this is Javed Samuel and Tom Ritter's talk all the way.
How my name came to be on it is a long, boring story.
Is this an anti-endorsement of the talk, then, or are you just trying to give credit where it's due? I thought it was pretty interesting, and I'm going to try to move to ECC over RSA where possible (for internal private SSL certs, SSH keys, etc.). I know that isn't quite the point, but it seems like it might help future-proof a bit.
I endorse the talk wholeheartedly but for reasons even I don't understand my name is first on the title slide and I don't want to claim any undue credit.
Would you be willing to answer what sort of linear algebra is required in the function field sieve that the talk claims is difficult to parallelize? I write open source parallel linear algebra software [1] and would be happy to contribute.
The matrices involved are generally modulo a prime (2 in factorization, a fairly large prime for the discrete logarithm), and very sparse. The latest factorization record, RSA-768, involved finding elements of the nullspace of a 192e6 x 192e6 matrix, with an average of 144 nonzeroes per row.
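For concreteness, here is a minimal sketch of the kernel these solvers spend nearly all their time in: a sparse matrix-vector product over GF(2). The row layout (a list of nonzero column indices per row) and the bit-packed vector are illustrative choices of mine, not how the record computations actually store things.

    def sparse_matvec_gf2(rows, v):
        """rows[i] lists the column indices with a 1 in row i of A.
        v is an integer whose bit j holds the j-th vector entry.
        Returns A*v over GF(2), packed the same way."""
        out = 0
        for i, cols in enumerate(rows):
            bit = 0
            for j in cols:
                bit ^= (v >> j) & 1   # parity of the selected entries of v
            out |= bit << i
        return out

    # Toy 3x3 example: v = (1, 0, 1) is annihilated by every row,
    # i.e. it is an element of the nullspace of A over GF(2).
    rows = [[0, 2], [1], [0, 1, 2]]
    v = 0b101
    print(bin(sparse_matvec_gf2(rows, v)))   # prints 0b0

Block Lanczos and block Wiedemann both reduce to repeating this product an enormous number of times, which is why the sparsity (about 144 nonzeroes per row) matters so much more than the raw dimension.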
Interesting. That isn't excessively large for distributed-memory Krylov-subspace eigensolvers (e.g., SLEPc). Do you have a good reference for the current algorithms used? And how large is the nullspace typically?
EDIT: I assume this [1] is what you're referring to, which draws from the following algorithms of Coppersmith [2,3].
Yes, that is correct. In practice, though, the block Lanczos variant generally used is Montgomery's [1], not Coppersmith's.
Lanczos was popular until a few years ago, as it requires slightly fewer matrix-vector multiplications, but since Wiedemann can be partially distributed it is becoming the more desirable option.
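As a toy illustration of why Wiedemann's first stage distributes (the matrix, prime, and vectors below are all stand-ins, not anything from a real computation): the Krylov sequences x^T A^i y_k for different starting vectors y_k need no communication with one another, so each can run on its own cluster, and only the short resulting sequences are gathered for the later minimal-polynomial (Berlekamp-Massey) step.

    p = 101  # small stand-in for the "fairly large prime" of the discrete-log case

    def krylov_sequence(A, x, y, length, p):
        """Return [x^T y, x^T A y, ..., x^T A^(length-1) y] mod p."""
        seq = []
        v = y[:]
        n = len(A)
        for _ in range(length):
            seq.append(sum(xi * vi for xi, vi in zip(x, v)) % p)
            v = [sum(A[i][j] * v[j] for j in range(n)) % p for i in range(n)]
        return seq

    A = [[2, 0, 1],
         [0, 3, 0],
         [5, 0, 7]]
    x = [1, 1, 1]
    ys = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # one starting vector per "cluster"

    # Each call is independent of the others; in block Wiedemann these would be
    # farmed out to separate machines, and only the short sequences come back.
    sequences = [krylov_sequence(A, x, y, 2 * len(A), p) for y in ys]
    print(sequences)

Lanczos, by contrast, needs the result of each iteration before the next can start, which is what keeps it confined to a single tightly coupled cluster.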