
This is wrong!

The term hyperplane already assumes that the hypothesis space your learning algorithm searches has some kind of dimension and is some variant of a Euclidean / vector space (or one of its generalisations). This is not the case for many forms of ML, for example grammar induction (where the hypothesis space consists of Chomsky-style grammars), inductive logic programming (where the hypothesis space consists of Prolog or similar programs), or, more generally, program synthesis (where programs form the hypothesis space).
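
To make that concrete, here is a minimal sketch (a toy DSL and function names of my own invention, not any standard library) of a hypothesis space made of programs rather than vectors: the learner enumerates candidate expressions and keeps the first one consistent with the examples. There is no dimension, no inner product, and no obvious "hyperplane" anywhere in this search.

    import itertools

    def make_exprs(depth):
        # Enumerate tiny arithmetic expressions over x with the given nesting depth.
        if depth == 0:
            yield ("x",)
            for c in (0, 1, 2):
                yield ("const", c)
            return
        for op in ("+", "*"):
            for left, right in itertools.product(list(make_exprs(depth - 1)), repeat=2):
                yield (op, left, right)

    def evaluate(expr, x):
        if expr[0] == "x":
            return x
        if expr[0] == "const":
            return expr[1]
        op, left, right = expr
        l, r = evaluate(left, x), evaluate(right, x)
        return l + r if op == "+" else l * r

    examples = [(1, 3), (2, 5), (3, 7)]  # consistent with 2*x + 1

    candidates = itertools.chain.from_iterable(make_exprs(d) for d in range(3))
    for expr in candidates:
        if all(evaluate(expr, x) == y for x, y in examples):
            print("consistent hypothesis:", expr)
            break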




It can also just be some sort of partitioning. I would be really surprised if there was no partitioning of some space.


Note that "some sort of partitioning" isn't a hyperplane. A partition is a set-theoretic concept. A hyperplane is (a generalisation of) a geometric concept, so has much more structure.


Alright, how about coalgebra?



