What you say about almost all linear algebra routines being multiplication sounds crazy to me. How are the following supposed to be multiplication: (1) computing the transpose, (2) finding a basis for the kernel of a matrix, (3) factorizations, such as QR, LU or SVD, (4) computing the Jordan normal form of a matrix, etc.?
It's not as crazy as it sounds, though I wouldn't make such a blanket statement. Most of these can be phrased in terms of matrix multiplication (maybe not a single matrix multiplication); it's just not always the most efficient way to go about it.
1. Transpose is a linear map on matrices (viewed as vectors in R^(nm)), so in a very concrete sense it is precisely a matrix multiplication. And it's not hard to figure out the matrix: acting on the flattened matrix, it's a permutation matrix, analogous to the one that swaps entries of a vector.
2. The dominant eigenvalue can be approximated by repeated matrix multiplication (power iteration), and for sparse matrices with a good spectral gap this extends to all of them.
3. Row reduction is multiplication by a sequence of elementary matrices, and hence finding a basis for the kernel of a matrix is a byproduct of it, as is computing eigenvectors once the eigenvalues are known.
4. Computing an orthonormal basis (Gram-Schmidt) is a sequence of projection operators (and all linear maps are matrix multiplications)
5. I'm pretty sure nobody computes the Jordan canonical form on computers.
The point is that the connection between matrices and linear maps is the spirit of linear algebra.
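To make point 2 above concrete, here is a minimal pure-Python sketch of power iteration (the function names are my own; real code would just use numpy or scipy.sparse.linalg.eigs): the only operation in the loop is a matrix-vector multiplication.

```python
# Power iteration: approximate the dominant eigenvalue/eigenvector of a
# matrix using nothing but repeated matrix-vector multiplication.

def mat_vec(A, v):
    # One matrix-vector product, written out over plain lists.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=100):
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)  # arbitrary nonzero starting vector
    for _ in range(iters):
        w = mat_vec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # renormalize each step
    # Rayleigh quotient v^T A v gives the eigenvalue estimate
    Av = mat_vec(A, v)
    lam = sum(a * b for a, b in zip(Av, v))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 3 and 1
lam, v = power_iteration(A)
print(round(lam, 6))  # ≈ 3.0
```

Convergence is geometric in the ratio of the second-largest to the largest eigenvalue, which is why the spectral gap matters.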
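And point 4: a toy Gram-Schmidt where each step subtracts the projection onto the vectors found so far. Every projection v -> (q.v)q is a rank-one linear map, i.e. itself a matrix multiplication; this is a pure-Python sketch with my own helper names, not how you'd do it in practice (numpy.linalg.qr).

```python
# Gram-Schmidt orthonormalization as a chain of projection operators.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(q, w)  # apply the projector onto span(q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:  # skip vectors (numerically) in the current span
            basis.append([wi / norm for wi in w])
    return basis

B = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

(Classical Gram-Schmidt as written here is numerically fragile; Householder-based QR is what libraries actually use, but the "sequence of projections" structure is the same idea.)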
It is as crazy as it sounds because we were talking about implementing an actual language/library, not about doing the linear-algebra equivalent of restating Turing's thesis.
> we were talking about implementing an actual language/library
No, we're talking about language-level syntactic standardization of the most fundamental computational concept in linear algebra (and I would argue all of mathematics). My list just gives evidence to how unifying the concept is.
If you want supercomputing-grade efficiency you won't stop using numpy/scipy just because of this update, or you've already rewritten your Python prototype in C/Fortran anyway.
> If you want supercomputing-grade efficiency you won't stop using numpy/scipy just because of this update
Especially since the update just creates a language feature for numpy and similar libraries to leverage. There is still no matrix implementation (and thus no matrix multiplication) in the standard library, just an overloadable operator, so that numerical libraries can consistently use it for matrix multiplication and reserve * for elementwise multiplication, giving them a standard API.
Lots of factorizations, such as NMF, can be implemented as a hill climb where you just multiply things against other things (and normalize elementwise).
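A toy illustration of that: Lee-Seung-style multiplicative updates for NMF, where every step of the climb is matrix multiplications plus elementwise scaling. This is a pure-Python sketch with my own helper names; real code would use numpy or sklearn.decomposition.NMF.

```python
# NMF V ~ W @ H via multiplicative updates: each iteration is built
# entirely from matrix products and elementwise multiply/divide.

import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=200, eps=1e-9):
    n, m = len(V), len(V[0])
    rng = random.Random(0)
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(m)] for i in range(k)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(k)] for i in range(n)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]  # rank-1, so k=1 fits exactly
W, H = nmf(V, 1)
```

The updates monotonically decrease the Frobenius reconstruction error, which is the "hill climb"; the only nonnegativity machinery is the elementwise multiply/divide.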