Hacker News

I grew up watching the Looney Tunes interpretation of physics and turned out just fine.



There's a big difference between cartoonishly incorrect and uncanny-valley plausibly correct.


There's a huge amount of such stuff in movies.

Special effects, weapons physics, unrealistic vehicles and planes, or the classic 'hacking'.


There’s also a huge difference in what people, even children, expect when sitting down to watch a movie versus seeing a clip of some funny cat/seal hybrid playing football while I’m looking for the Bluey episode we left off on. My daughter is almost five and cautiously asks “is that real?” about a lot of things now. It definitely makes me work harder when trying to explain the things that don’t look real but actually are; it can feel like it takes some of the magic away from those moments. I feel alright in my ability to handle it, and it’s my responsibility to try, but it isn’t as simple as the Looney Tunes argument or, I believe, dramatic effects in movies and TV.


Yet in a movie setting it's clear that something is a special effect or the like, which is not the case for GenAI. Massive underestimation of the potential impact in this thread; scary.


Maybe. Or maybe some people massively underestimate our ability to cope with fiction and new media types.

I am sure that there were people decrying radio for all these same reasons (“how will the children know that the voices aren’t people in the same room?”)


Not a bad point: those representations have, in some cases, caused widespread misunderstandings among people who learn about those concepts from movies... and all while simultaneously knowing "it's just a movie".


Yes, but a movie is a movie, whereas these AI-generated videos will likely be used to replace stock footage in other contexts (documentary, promotional, etc.).


If the producer wants to publish bad physics, they get bad physics.

If the producer wants to publish good physics, they get good physics.

It doesn't matter if it is AI, CGI, live action, stop motion, pen-and-ink animation, or anything else.

The output is whatever the production team wants it to be, just as has been the case for as long as we've had cinema (or advertising or documentaries or TikToks or whatevers).

Nothing has changed.


You don't have full control over AI-generated images though, or not to the same extent that producers have with CGI.

There's a video at the very bottom of sora.com with tennis players on a roof; notice how one player just walks "through" the net. I don't think you can fix this other than by cutting the video earlier.


There are already techniques for controlling AI-generated images. There's ControlNet for Stable Diffusion, and there are already techniques for taking existing footage and style-morphing it with AI. For larger-budget productions I would anticipate video-production tooling to arise where directors and animators have fine-grained influence and control over the wireframes within a 3D scene, letting them directly prevent or fix issues like clipping, volumetric changes, visual consistency, text generation, gravity, etc. Or they could simply record and produce their video in a lower-budget format and then have it re-rendered with AI to set the style or mood while adhering to scene layout, perspective, timing, cuts, etc. That's useful not just for mitigating AI errors but also for controlling their vision of the final product.

Or they could simply brute-force it: clip the scene at the problem point and re-render from there, iterating until it's no longer problematic. Or do the bulk of the work with AI and use video inpainting to fix small areas, or reserve human CGI artists for the unmitigable problems that crop up, if those are fixable without a full re-render (whichever ends up less expensive).

Plus, given the world models that have been released in the last week or so, AI will soon get better at maintaining a full and accurate representation of the world it creates, and future generations of this technology, beyond what Sora is doing, simply won't make these mistakes.


>You don't have full control over AI-generated images though,

So the AI just publishes stuff on my behalf now?

No, comrade.


People don't watch The Matrix expecting a documentary on how we all got plugged in. If someone generated the referenced ladybug movie for use in a science classroom, that's a problem.


I agree. The issue is in using it for teaching science though, not in generating it.

Similar to how it's fine to create fiction, but not to claim it to be true.


And it's already harmful in some cases. E.g., people drag crash victims out of a car because they think it's going to explode, sometimes seriously injuring them in the process.


Did you see the movie Battleship? Or a good percentage of recent and not-so-recent action movies? At least with The Matrix it could be argued that it was about a virtual reality.


"A body at rest remains at rest until it looks down and realizes it has stepped off of a cliff."


these will be a lot less violent too ;-) for a little while at least.



