There's a lot to like in this article, but I don't quite agree with the setup. I think it's better to view "contrastive" approaches as orthogonal to the basic self-supervised methods: they are an additional term you can add to your loss function, and one that results in very significant improvements. They can be combined with existing self-supervised pretext tasks.
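
To make the "additional term" concrete, here's a rough PyTorch sketch of what I mean: a SimCLR-style NT-Xent contrastive term bolted onto a rotation-prediction pretext task. The encoder/rotation_head names and the 0.5 weighting are just illustrative, not anything taken from the article:

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.1):
        # SimCLR-style contrastive loss over two augmented views of each image
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)                  # (2N, D)
        sim = z @ z.t() / temperature                   # pairwise similarities
        sim.fill_diagonal_(float('-inf'))               # ignore self-similarity
        n = z1.size(0)
        pos = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
        return F.cross_entropy(sim, pos)                # positive = the other view

    def combined_loss(encoder, rotation_head, view1, view2, rotated, rot_labels):
        # existing pretext task: predict which rotation was applied
        pretext = F.cross_entropy(rotation_head(encoder(rotated)), rot_labels)
        # contrastive add-on: pull the two views of the same image together
        contrastive = nt_xent(encoder(view1), encoder(view2))
        return pretext + 0.5 * contrastive              # weighting is a guess

The point is just that the contrastive piece slots in next to whatever pretext loss you're already using, rather than replacing it.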

I've discussed these ideas here, for those who are interested in learning more: https://www.fast.ai/2020/01/13/self_supervised/




BTW, one thing that makes it a bit hard to get into self-supervised learning is that the most common benchmarking task involves pretraining on ImageNet, which is too slow and expensive for development.

I recently created a little dataset, Image网 ("Imagewang"), designed specifically for testing out self-supervised techniques. I'd love to see some folks try it out and submit strong baselines to the leaderboard: https://github.com/fastai/imagenette#image%E7%BD%91
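
Getting the data is roughly a one-liner with fastai; this sketch assumes your fastai version ships an IMAGEWANG entry in URLs, and the folder layout is best checked against the repo README:

    from fastai.vision.all import *

    path = untar_data(URLs.IMAGEWANG_160)   # 160px variant; other sizes are in the repo
    print(path.ls())                        # top-level folders, e.g. train/ and val/
    files = get_image_files(path/'train')   # the pool of images to pretrain on
    print(len(files))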


I might take you up on that. How does your dataset facilitate self-supervised experimentation?

I've a good amount of experience playing with autoencoders, but this is the first I've heard of contrastive learning.


I didn't mean to convey that we should abandon generative self-supervised methods, but I can see how comparing them gives that impression.

Agree that using them in conjunction would make sense, since generative methods could capture some features better than contrastive ones, and vice versa.
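
For instance (very rough sketch, with a made-up tiny autoencoder and an arbitrary 0.1 weighting), you could keep the reconstruction loss you already have and add a contrastive term on the latent codes of two augmented views:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyAE(nn.Module):
        def __init__(self, dim=784, latent=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, latent))
            self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))
        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), z

    def info_nce(z1, z2, t=0.1):
        # match each image's first view to its own second view
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / t
        return F.cross_entropy(logits, torch.arange(z1.size(0), device=z1.device))

    def generative_plus_contrastive(model, view1, view2):
        recon1, z1 = model(view1)
        recon2, z2 = model(view2)
        generative = F.mse_loss(recon1, view1) + F.mse_loss(recon2, view2)
        return generative + 0.1 * info_nce(z1, z2)   # weighting is arbitrary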



