Hacker News

“The version of Sora we are deploying has many limitations. It often generates unrealistic physics and struggles with complex actions over long durations. Although Sora Turbo is much faster than the February preview, we’re still working to make the technology affordable for everyone.”

So they demo the full model and release the quantised and censored model.

Does anyone else find this kind of bait & switch distasteful?




You don't need to worry. Open source video is already pulling ahead of closed source.

Hunyuan [1] is better than Sora Turbo and is 100% open source. It's got fine-tuning code, LoRA training code, multiple modalities, ControlNets, ComfyUI compatibility, and is rapidly growing an ecosystem around it.

Hunyuan is going to be the Stable Diffusion / Flux for video, and that doesn't bode well for Sora. Nobody even mentions DALL-E in conversation anymore, and I expect the same to hold true for closed source foundation video models.

And if one company developing foundation video models in the open isn't good enough, then Lightricks' LTX and Genmo's Mochi should provide additional reassurance that this is going to be commoditized and made readily available to everyone.

I've even heard from the Banodoco [2] grapevine that Meta is considering releasing their foundation video model as open source.

[1] https://github.com/Tencent/HunyuanVideo/

[2] Banodoco is one of the best communities for open source foundation AI video; https://banodoco.ai/


Maybe, but the alternative would be not to demo state-of-the-art results at all, which I wouldn't like either.





