Hacker News

An SVM is a quadratic program, which is convex. This means it should always converge, and always to the same global optimum regardless of initialization, as long as the problem is feasible, i.e. as long as the two classes can be separated by an SVM.



The soft-margin SVM, which can handle misclassifications, is also convex and has a unique global optimum [0].

[0] https://stackoverflow.com/a/12610455/992102
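A quick way to see that seed-independence in practice (my own sketch, assuming scikit-learn; not from the linked answer): fit the same linear SVC twice with different `random_state` values and compare the solutions.

```python
# Because the SVM objective is a convex QP, two fits with different
# seeds should land on the same optimum. SVC's random_state does not
# affect the deterministic libsvm solve on this problem.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

a = SVC(kernel="linear", C=1.0, random_state=0).fit(X, y)
b = SVC(kernel="linear", C=1.0, random_state=42).fit(X, y)

# Same global optimum => same separating hyperplane (up to tolerance).
assert np.allclose(a.coef_, b.coef_)
assert np.allclose(a.intercept_, b.intercept_)
```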


> as long as the two classes can be separated by an SVM.

Are the classes separable with, e.g., the intertwined spiral dataset in the TensorFlow demo? Maybe only with a radial basis function (RBF) kernel?
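A sketch of that intuition (the dataset construction and parameters are my own, not taken from any TensorFlow demo): two intertwined spirals are not linearly separable, but an RBF kernel can separate them in feature space.

```python
# Two intertwined spirals: class b is class a rotated 180 degrees, so
# the dataset is point-symmetric and any linear separator does poorly,
# while an RBF kernel can carve out the spiral arms.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.5, 3 * np.pi, n)
spiral_a = np.c_[t * np.cos(t), t * np.sin(t)]
spiral_b = -spiral_a  # second spiral, rotated half a turn
X = np.vstack([spiral_a, spiral_b]) + rng.normal(0, 0.1, (2 * n, 2))
y = np.array([0] * n + [1] * n)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

print("linear training accuracy:", linear.score(X, y))
print("rbf training accuracy:", rbf.score(X, y))
```

With a hard-margin formulation the linear problem would simply be infeasible here; soft-margin SVC still "converges", just to a high-error hyperplane.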

Separable state https://en.wikipedia.org/wiki/Separable_state :

> In quantum mechanics, separable states are quantum states belonging to a composite space that can be factored into individual states belonging to separate subspaces. A state is said to be entangled if it is not separable. In general, determining if a state is separable is not straightforward and the problem is classed as NP-hard.

An algorithm may also converge to the same wrong, or high-error, answer regardless of a random seed parameter.

It looks like there is randomization in SVMs for e.g. Platt scaling [1], though I had confused simulated annealing with SVMs. And then I re-read about quantum annealing; what is the ground state of the Hamiltonian, and why would I use a hyperplane instead?

[1] https://news.ycombinator.com/item?id=37369783
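To pin down where that randomization actually enters (my own sketch, assuming scikit-learn): `SVC(probability=True)` fits Platt scaling via internal cross-validation, and `random_state` shuffles those folds, so the probability estimates can vary with the seed while the underlying decision function, which comes from the convex QP, does not.

```python
# The convex QP gives a seed-independent decision function; only the
# Platt-scaling calibration step (probability=True) uses random_state,
# for shuffling its internal cross-validation folds.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (60, 2)), rng.normal(1, 1, (60, 2))])
y = np.array([0] * 60 + [1] * 60)

a = SVC(probability=True, random_state=0).fit(X, y)
b = SVC(probability=True, random_state=1).fit(X, y)

# The separating function itself is identical across seeds...
assert np.allclose(a.decision_function(X), b.decision_function(X))
# ...but the calibrated probabilities may differ slightly.
print(np.abs(a.predict_proba(X) - b.predict_proba(X)).max())
```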



