
That's a good point, and it also brings to mind a way to fight it.

This http://arxiv.org/pdf/1412.1897v4.pdf could be one way. You could generate images that are not actually of the person who wants to protect their privacy, but are tweaked so that the neural net strongly associates them with that person; the underlying picture could be anything you've optimized toward that target (pure noise, another person, etc.). You could generate batches of these, upload them, and confirm to Facebook that they are indeed photos of the person in question. Repeating this (automatically) would corrupt the weights in FB's neural net and degrade its ability to detect that individual's face.
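A rough sketch of that idea, in the spirit of the paper's gradient-ascent "fooling images." Everything here is illustrative: `model` stands in for the face classifier and `target_class` for the label assigned to the person, neither of which an attacker would actually have direct access to.

    import torch

    def make_fooling_image(model, target_class, steps=500, lr=0.05):
        """Optimize random noise until the classifier labels it `target_class`."""
        model.eval()
        # Start from noise; shape assumes a 224x224 RGB input.
        x = torch.randn(1, 3, 224, 224, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model(x)
            # Gradient ascent on the target logit (by minimizing its negation).
            (-logits[0, target_class]).backward()
            opt.step()
            with torch.no_grad():
                x.clamp_(0.0, 1.0)  # keep pixel values in a valid range
        return x.detach()

The result is an image the net confidently labels as the target person even though, to a human, it looks like nothing at all.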




Interesting paper, but I'm not sure that's entirely practical.

If I were Facebook, I'd look at the entropy of an image before attempting to classify it. Anything with particularly high entropy (noise) or particularly low entropy (squiggles) would be discarded before being run through the classifier.
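Roughly what I mean, assuming 8-bit grayscale inputs; the thresholds here are made up for illustration, not tuned values:

    import numpy as np

    def image_entropy(gray: np.ndarray) -> float:
        """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins so log2 is defined
        return float(-(p * np.log2(p)).sum())

    def plausible_photo(gray: np.ndarray, lo: float = 3.0, hi: float = 7.5) -> bool:
        """Reject near-noise (entropy approaching 8 bits) and near-blank images."""
        return lo < image_entropy(gray) < hi

Of course, this only filters the degenerate cases; an adversarial image built from a photo of another person would sail right through an entropy gate.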



