It would require hardware changes, but we already have all the required cryptographic primitives to mostly mitigate these concerns.

Trusted timestamping can attest that a given piece of media was not created after a given instant in time.
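
A rough sketch of the timestamping piece, assuming a timestamp authority (TSA) holding an Ed25519 key that signs (time || media hash); production systems would use something like RFC 3161, and every name below is illustrative:

  import hashlib, struct, time
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  tsa_key = Ed25519PrivateKey.generate()  # held by the TSA, not the camera

  def timestamp(media: bytes):
      t = int(time.time())
      token = struct.pack(">Q", t) + hashlib.sha256(media).digest()
      return t, tsa_key.sign(token)  # TSA attests: this hash existed at time t

  def verify(media: bytes, t: int, sig: bytes):
      token = struct.pack(">Q", t) + hashlib.sha256(media).digest()
      tsa_key.public_key().verify(sig, token)  # raises InvalidSignature if forged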

Trusted hardware running trusted code, combined with remote attestation, can offer a high degree of assurance that a given video clip is not a product of CGI/ML.
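
Remote attestation, reduced to a toy model: the verifier sends a fresh nonce, and the device's attestation key signs it together with a measurement of the running firmware. The provisioning step and the known-good measurement below are stand-ins for what a real trusted-hardware vendor would supply:

  import os
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  attestation_key = Ed25519PrivateKey.generate()  # provisioned at manufacture
  KNOWN_GOOD_FIRMWARE = bytes(32)                 # measurement the verifier expects

  def device_quote(nonce: bytes, firmware_hash: bytes) -> bytes:
      # On real hardware this would run inside the trusted execution environment.
      return attestation_key.sign(nonce + firmware_hash)

  # Verifier side: the fresh nonce defeats replay; a valid signature over the
  # expected measurement shows what code was running at capture time.
  nonce = os.urandom(32)
  quote = device_quote(nonce, KNOWN_GOOD_FIRMWARE)
  attestation_key.public_key().verify(quote, nonce + KNOWN_GOOD_FIRMWARE)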

Digital signatures by the entity responsible for a video clip's capture/creation mitigate the creation of media not sanctioned by that entity. Subjects appearing in a particular piece of media could additionally apply their own digital signatures to further increase trust in its authenticity. (This is effectively a cryptographically enforced version of your third point.)
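
A minimal sketch of that two-layer signing, with throwaway keys standing in for real device and subject identities:

  import hashlib
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  device_key = Ed25519PrivateKey.generate()
  subject_key = Ed25519PrivateKey.generate()

  clip_hash = hashlib.sha256(b"...video bytes...").digest()
  device_sig = device_key.sign(clip_hash)     # "this device captured the clip"
  subject_sig = subject_key.sign(device_sig)  # "I appear in it and endorse it"

  device_key.public_key().verify(device_sig, clip_hash)
  subject_key.public_key().verify(subject_sig, device_sig)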

This isn't a silver bullet, and it suffers from all the pitfalls normally encountered when implementing cryptographic schemes.




Your thoughts are very interesting, but I'm afraid remote attestation may not be enough. Processor instructions aren't sufficient as verification. Suppose I use AI to create a deepfake, then open up my phone, detach the wire that runs from the camera to the microprocessor, and rewire it so it runs from an Arduino to the microprocessor instead. Then I program the Arduino to send the microprocessor exactly the signal my camera would have sent had it just taken my deepfake as a photo.

I suppose a chip inside the camera (in a tamper-resistant housing that self-destructs if you attempt disassembly) which signs the photo before sending it to the microprocessor could be sufficient in practice. Especially if the provider of the key infrastructure embeds a separate key in every camera and monitors for abuse.
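
A sketch of that per-camera key scheme, treating the key-infrastructure provider as the certifying party; the revocation set is where the "monitors for abuse" part would plug in, and all names are illustrative:

  import hashlib
  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
      Ed25519PrivateKey, Ed25519PublicKey)
  from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

  maker_key = Ed25519PrivateKey.generate()   # the key-infrastructure provider
  sensor_key = Ed25519PrivateKey.generate()  # unique per camera, burned in at the factory

  sensor_pub = sensor_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
  device_cert = maker_key.sign(sensor_pub)   # "this sensor is one of ours"
  revoked: set = set()                       # published by the maker on detected abuse

  def verify_photo(photo: bytes, photo_sig: bytes) -> bool:
      if sensor_pub in revoked:
          return False  # key was abused; one camera dies, the rest stay trusted
      maker_key.public_key().verify(device_cert, sensor_pub)
      Ed25519PublicKey.from_public_bytes(sensor_pub).verify(
          photo_sig, hashlib.sha256(photo).digest())
      return True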

Anyway who is gonna build this? Perhaps Google could start selling special cameras to journalists, such that if the photos in a story on Google News were taken using such a camera, the story gets a special checkmark next to it?


It wasn't elaborated on in the original comment, but yes, you're absolutely correct: the imaging sensor itself would also need to be part of the cryptographic chain of trust. This is likely the most crucial piece of such a system, as all the other hardware aspects are already starting to become generally available.

> Anyway who is gonna build this?

If and when specialized imaging sensors become available, I don't see this being a huge lift. One wouldn't even need purpose-built cameras for all but the most high-assurance recording requirements. Remote attestation and trusted code execution are already being incorporated into most newer devices. Bringing it all together would mostly be a software problem at that point.

Of course, there would also need to be a value proposition to justify any kind of work towards this. I personally see this as an inevitability given the frightening pace of progress in the ML field, and the increasing public concern over things like deepfakes.

Thank you for your comments!


You could start a startup building the specialized imaging sensors, with the aim of being acquired by $bigtech so they can tell Congress "yes, we are doing something about deepfakes" and eventually tell consumers "you can tell that photos taken by our devices/posted to our site are real; the other sites are full of BS".

I don't see how it all comes together without a partnership between the tech companies that display media (YouTube, Facebook, Twitter, etc.) and the tech companies that provide hardware (Apple, Google, Samsung, etc.). Maybe what's needed is to go through a nonprofit like the Partnership on AI.

Google is an interesting target because they are basically the only company that's both a media displayer and a hardware provider. If Android devices get a "verified non-deepfake" capability before Apple devices do, I could see that being a significant leg up for Android, actually. It encroaches on Apple's traditional branding of "we make products for important people". I wonder if the Titan M has capabilities which could be leveraged. https://android-developers.googleblog.com/2018/10/building-t...

Google would probably sell loads of Pixel phones if it upranked "verified non-deepfake" Youtube videos recorded using Pixel cameras & microphones. Every aspiring Youtube star would know to record using a Pixel. The deepfake thing is a pretty good excuse to defend against an antitrust lawsuit, especially if Google can say that the capability will eventually roll out to competing hardware.


I can just project a video into a camera's lens...


Fun fact: the original Apollo TV broadcast from the surface of the Moon was "slow scan", with a different resolution and frame rate from broadcast TV. The conversion process was to display it on a suitable slow-scan TV (with a long-persistence phosphor) and... point a terrestrial camera at the TV.


Sure, and if you eventually get caught, nobody would trust anything that you've digitally signed.


You can say that just as well about a world without the camera hocus pocus part.


In practice, not really.

The key difference is that digital signatures allow revocation of trust to be an effortless, automatic, and delegatable process, as opposed to the current process, which relies on individual (and intrinsically limited) knowledge of who can be trusted. Frequently one doesn't even have provenance information for a media item, which is a prerequisite for making any trustworthiness judgment call.
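
In other words, revocation becomes data instead of reputation. A deliberately trivial sketch, with an illustrative revocation feed:

  REVOKED_KEYS: set = set()  # fetched from the certifier's published feed

  def still_trusted(signer_key_id: str) -> bool:
      return signer_key_id not in REVOKED_KEYS

  REVOKED_KEYS.add("camera-key-1234")          # caught signing a fake once...
  assert not still_trusted("camera-key-1234")  # ...and every verifier flips at once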

The "hocus pocus" is literally the point that distinguishes this scheme from the plainly non-working system that we have today.


Just because something is different from something that doesn't work doesn't mean the different thing will work: most things don't work.

I'm not seeing anything in your description that couldn't also be said about using PKI to let people sign the material they publish. (And better: that approach isn't broken by non-deceptive edits, like cropping or brightness adjustments.) Except that PKI already exists, so we can't imagine that it would magically solve these problems. :) (And there is a long, long history of people thinking various PKI ideas will magically solve various problems and then failing to solve them at all.)

(I mean, cameras that produce 'signed' output are also a thing that has existed before: https://blog.elcomsoft.com/2011/04/nikon-image-authenticatio... and it was never particularly popular even with the target market.)


Trusted by whom?


Trusted by whatever certification entities (ideally multiple ones, from non-aligned states) decide to deem a certain piece of hardware "trusted", and then, by extension, by any individual who trusts those particular certification authorities.
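
One way to operationalize the "multiple non-aligned" part is a k-of-n threshold over independent certifiers; a toy sketch with made-up names:

  AUTHORITIES = {"cert-org-a", "cert-org-b", "cert-org-c"}  # illustrative
  THRESHOLD = 2

  def hardware_trusted(endorsements: set) -> bool:
      return len(endorsements & AUTHORITIES) >= THRESHOLD

  assert hardware_trusted({"cert-org-a", "cert-org-c"})
  assert not hardware_trusted({"cert-org-b"})  # one possibly-subverted CA isn't enough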

Yes, I'm aware this leaves the door open for subversion of certification authorities by nation-states. It's still better than the nothing we have today.



