
Maybe I'm misunderstanding this but this doesn't seem like an accurate explanation of overfitting:

> In machine learning (ML), overfitting is a pervasive phenomenon. We want to train an ML model to achieve some goal. We can't directly fit the model to the goal, so we instead train the model using some proxy which is similar to the goal

One of the pernicious aspects of overfitting is that it occurs even if you can perfectly represent your goal via a training metric. In fact it's sometimes even worse: an incorrect training metric can indirectly help regularise the outcome.
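
To make that concrete (my own sketch, not from the article): even when the training loss is literally the metric we care about, plain MSE in this hypothetical Python/NumPy example, the model still overfits, because the finite training sample is only an approximation of the underlying distribution.

    # Sketch: overfitting with the *exact* metric we care about.
    # The training loss is plain MSE, the same quantity we evaluate
    # on held-out data -- yet the model still overfits, because the
    # small training sample only approximates the data distribution.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n):
        x = rng.uniform(-1, 1, n)
        y = np.sin(3 * x) + rng.normal(0, 0.3, n)  # true signal + noise
        return x, y

    x_train, y_train = sample(20)     # small training set
    x_test, y_test = sample(10_000)   # stand-in for the "actual workload"

    # Fit a degree-15 polynomial by least squares, i.e. minimising training MSE.
    coeffs = np.polyfit(x_train, y_train, deg=15)

    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    print("train MSE:", mse(x_train, y_train))  # typically small
    print("test MSE: ", mse(x_test, y_test))    # typically much larger

No proxy metric is involved: the loss and the evaluation metric are identical, and the gap between train and test error comes entirely from the finite sample.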




You might be misunderstanding here what the "goal" is. Your training metric is just another approximation of the goal, and it is almost never perfect. If it is perfect, you cannot overfit, by definition.


No, because your training data is only an approximation of the actual workload.



