Hacker News
kibbi | 51 days ago | on: OpenAI Audio Models
The sample sounds impressive, but based on their claim -- 'Streaming inference is faster than playback even on an A100 40GB for the 3 billion parameter model' -- I don't think this could run on a standard laptop.
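A rough back-of-envelope supports that read: in autoregressive decoding, every generated token has to stream essentially all the weights through the GPU, so memory bandwidth tends to be the binding constraint. A minimal sketch, where the token rate (~50 audio tokens per second of output, typical for neural audio codecs) and fp16 weights are my assumptions, not claims from OpenAI:

```python
# Back-of-envelope: can a laptop decode a 3B-parameter audio model at playback speed?
# Assumptions (hypothetical, for illustration only):
#   - ~2 FLOPs per parameter per generated token (standard forward-pass estimate)
#   - ~50 audio tokens per second of generated audio (assumed codec frame rate)
#   - fp16 weights (2 bytes per parameter), weights re-read once per token

params = 3e9                                   # 3 billion parameters
tokens_per_sec_audio = 50                      # assumed tokens per second of audio

# Compute view: FLOP/s needed to keep up with playback.
flops_per_token = 2 * params
required_flops = flops_per_token * tokens_per_sec_audio

# Bandwidth view: bytes/s of weight traffic needed to keep up with playback.
bytes_per_token = 2 * params                   # fp16
required_bw = bytes_per_token * tokens_per_sec_audio

print(f"Compute needed:   {required_flops / 1e12:.1f} TFLOP/s")
print(f"Bandwidth needed: {required_bw / 1e9:.0f} GB/s")
```

Under these assumptions the bandwidth figure comes out around 300 GB/s of weight traffic just to match playback. An A100 40GB has roughly 1.5 TB/s of HBM bandwidth, so "faster than playback" there leaves only a few-x margin; typical laptop GPUs sit well below that, which is consistent with the skepticism about real-time laptop inference.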