
The most important interpretation IMO is that a matrix is a specification for a linear map. A linear map is determined by what it does to a basis, and the columns of a matrix are just the list of outputs for each basis element: the first column is `f(b_1)`, the nth column is `f(b_n)`. If A is the matrix for f and B the matrix for g (for some chosen bases), then BA is the matrix for the composition x -> g(f(x)), i.e. its nth column is `g(f(b_n))`.

The codomain of f has to match the ___domain of g for composition to make sense, which means dimensions have to match (i.e. row count of A must be column count of B).
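Concretely, here's a rough numpy sketch of both claims (the particular A, B, f, g and dimensions are just made-up examples):

    # Columns of A are the images of the standard basis; B @ A is g o f.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))   # f: R^3 -> R^4
    B = rng.standard_normal((2, 4))   # g: R^4 -> R^2  (columns of B = rows of A = 4)

    f = lambda x: A @ x
    g = lambda y: B @ y

    # Column n of A is f applied to the nth standard basis vector.
    e = np.eye(3)
    assert np.allclose(A[:, 1], f(e[:, 1]))

    # B @ A is the matrix of the composition x -> g(f(x)).
    x = rng.standard_normal(3)
    assert np.allclose((B @ A) @ x, g(f(x)))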




It's debatable what the "most important" perspective is. For example, if I need a bunch of dot products between two sets of vectors, that doesn't seem like a linear map or a change of basis (not to me at least), and yet that's exactly what matrix multiplication is: each entry of the product is the dot product of one vector from the first set with one vector from the second.
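A small numpy sketch of that view (X and Y are just made-up example data): entry (i, j) of X @ Y.T is the dot product of the ith vector in X with the jth vector in Y.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((5, 3))   # 5 vectors in R^3, stored as rows
    Y = rng.standard_normal((7, 3))   # 7 vectors in R^3, stored as rows

    # Entry (i, j) is the dot product of X[i] with Y[j].
    by_hand = np.array([[X[i] @ Y[j] for j in range(7)] for i in range(5)])
    assert np.allclose(X @ Y.T, by_hand)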

Or when I think about the singular value decomposition, I'm not thinking about linear maps and change of basis, but I am thinking about a sum of many outer product layers.
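Rough sketch of that picture (M is just random test data): the SVD writes M as a sum of rank-one outer-product layers s_i * u_i v_i^T.

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((4, 6))

    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    layers = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
    assert np.allclose(M, layers)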


If you don't have a linear map in mind, why do you write your dot products with one set of column vectors and another set of row vectors? Computationally, the best way to do dot products would be to walk all of your arrays in contiguous memory order, so the row/column thing is an unnecessary complication. And if you have more than 2 matrices to multiply/"steps of dot products to do in a pipeline", there's almost certainly a relevant interpretation as linear maps lurking.
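For what it's worth, with both sets stored as rows (row-major), the pairwise dot products can be written without physically transposing anything; a small numpy sketch with made-up X, Y, where the einsum walks both arrays row by row and X @ Y.T is the row/column spelling of the same computation:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((5, 3))
    Y = rng.standard_normal((7, 3))

    assert np.allclose(np.einsum('ik,jk->ij', X, Y), X @ Y.T)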

Outer products are one way to define a "simple" linear map. What SVD tells you is that every (finite dimensional) linear map is a sum of outer products; there are no other possibilities.
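A quick numpy check of that statement (the outer-product factors here are just random examples): a sum of k generic outer products has exactly k nonzero singular values, i.e. rank k.

    import numpy as np

    rng = np.random.default_rng(4)
    k = 2
    M = sum(np.outer(rng.standard_normal(4), rng.standard_normal(6)) for _ in range(k))

    s = np.linalg.svd(M, compute_uv=False)
    assert np.sum(s > 1e-10) == k   # rank equals the number of outer-product terms (generically)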



