You've got to wonder, does matrix multiplication happen so often and is it so inconvenient to call as a function that it's worth complicating the syntax? I'm really not sure about this one.



Yes! NumPy is hugely popular in (at least) physics and machine learning, and the main negative comment I get is "Wow, that's horrible syntax". I'm still not sure they'll be convinced, however:

    trans_coef.T.dot(solve(H.dot(V).dot(H.T), trans_coef))
becomes

    trans_coef.T @ solve(H @ V @ H.T, trans_coef)
but in MATLAB or Julia it might be:

    trans_coef' * ((H * V * H') \ trans_coef)
....I want a backslash operator now!
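
For anyone who wants to try the comparison, here is a minimal runnable sketch; the matrix names and shapes are made up (small random stand-ins for real data), and NumPy's solve(A, b) is the closest thing to MATLAB's A \ b:

    import numpy as np
    from numpy.linalg import solve

    # Small random stand-ins for the real matrices; shapes are chosen only
    # so every product is defined and H @ V @ H.T is square and invertible.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((3, 4))
    V = rng.standard_normal((4, 4))
    trans_coef = rng.standard_normal((3, 2))

    # Old style: chained .dot() calls.
    old = trans_coef.T.dot(solve(H.dot(V).dot(H.T), trans_coef))

    # PEP 465 style: the @ operator (Python 3.5+, NumPy 1.10+).
    new = trans_coef.T @ solve(H @ V @ H.T, trans_coef)

    # There is no backslash operator; solve(A, b) plays the role of A \ b.
    assert np.allclose(old, new)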


A few fields that use matrix computations extensively that come to mind: robotics, graphics, control, machine learning.

I'd go as far as saying that matrix computation plays an important role in more or less every kind of programming except for web/app development.

edit: As another comment points out, the PEP actually answers this question too and contains a slightly more comprehensive list of disciplines where matrices find frequent use: http://legacy.python.org/dev/peps/pep-0465/#but-isn-t-matrix...


> You've got to wonder, does matrix multiplication happen so often and is it so inconvenient to call as a function that it's worth complicating the syntax?

It is empirically observed (and it is the rationale identified in the PEP) that matrix multiplication is currently done by overloading existing operators in popular Python libraries (NumPy goes so far as to have separate types that differ primarily in whether * does matrix or elementwise multiplication), and that this causes confusion, API fragmentation, and inefficiency. So yes, it happens often enough, and is inconvenient enough, that a dedicated operator that doesn't interfere with the existing ones is desirable.
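
Concretely, that NumPy split is np.ndarray vs. np.matrix; a small illustrative sketch (np.matrix is nowadays discouraged in favour of plain arrays plus @):

    import numpy as np

    a = np.array([[1, 2], [3, 4]])     # plain ndarray
    m = np.matrix([[1, 2], [3, 4]])    # the separate matrix type

    # On ndarray, * is elementwise; the matrix product needs dot().
    print(a * a)        # [[ 1  4]
                        #  [ 9 16]]
    print(a.dot(a))     # [[ 7 10]
                        #  [15 22]]

    # On np.matrix, the very same * is the matrix product.
    print(m * m)        # [[ 7 10]
                        #  [15 22]]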

It's not really "complicating the syntax"; it's just adding two new operators and their corresponding magic methods. The structure of the syntax is unchanged, and these operators work just like other Python operators, including in how they relate to those magic methods.
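
To sketch what the magic methods look like in practice (a toy class, not NumPy's real implementation): defining __matmul__ is enough for @ to work on your own type, __imatmul__ covers @=, and __rmatmul__ is the reflected variant.

    class Mat:
        """Toy 2x2 matrix, only to show the new hooks."""
        def __init__(self, rows):
            self.rows = rows

        def __matmul__(self, other):       # called for: self @ other
            (a, b), (c, d) = self.rows
            (e, f), (g, h) = other.rows
            return Mat([[a*e + b*g, a*f + b*h],
                        [c*e + d*g, c*f + d*h]])

        def __imatmul__(self, other):      # called for: self @= other
            self.rows = (self @ other).rows
            return self

    x = Mat([[1, 2], [3, 4]])
    y = Mat([[5, 6], [7, 8]])
    print((x @ y).rows)    # [[19, 22], [43, 50]]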

Really, you should read the PEP (Python Enhancement Proposal) that is the linked article. It covers your question in great depth.


I'd say matrix multiplication is far more common than complex numbers, and Python natively supports those.
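
For reference, complex numbers get literal syntax and operators out of the box, while matrix multiplication has always gone through a library; a trivial illustration:

    # Complex arithmetic is built into the language: no imports, no methods.
    z = (1 + 2j) * (3 - 4j)
    print(z)            # (11+2j)

    # Matrix multiplication needs a library and, before @, a function or
    # method call.
    import numpy as np
    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])
    print(a.dot(b))     # [[19 22]
                        #  [43 50]]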


Yeah, but complex numbers don't introduce new punctuation, a.k.a. symbolic noise.


When you have a multiplication of 5-10 matrices that has to be done in the correct order, the expression gets horribly unreadable when written as nested function applications. And those kinds of expressions are ubiquitous in numerical computing.
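
To make that concrete, here is a sketch with five stand-in matrices; the nested version buries the evaluation order in parentheses, while the @ version reads left to right like the formula on paper:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D, E = (rng.standard_normal((3, 3)) for _ in range(5))

    # Nested function applications: the order of operations is buried.
    nested = np.dot(np.dot(np.dot(np.dot(A, B), C), D), E)

    # With the @ operator the same product reads left to right.
    chained = A @ B @ C @ D @ E

    assert np.allclose(nested, chained)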


Agreed that deeply nested function application is hard to read for any purpose, not just matrix multiplication, which is why anyone who cares about readability and about verifying correctness would split it over multiple lines/expressions. Method chaining, provided the functions return the right types, could be a way to avoid that deep nesting too.
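
For concreteness, a small sketch of both workarounds (named intermediates and method chaining), with made-up matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

    # Split over named intermediate results...
    inner = B.dot(C)
    split = A.dot(inner).dot(D)

    # ...or chain methods left to right, which also avoids deep nesting.
    chained = A.dot(B).dot(C).dot(D)

    # Matrix multiplication is associative, so both give the same product.
    assert np.allclose(split, chained)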


Usually the algorithm is described in another document as a single expression, not split across several expressions and lines. "Readability" in some of these contexts means being able to verify that you've typed the expression in correctly.


Yep, it's key for a lot of technical computing, and FORTRAN has had it for decades.



