Hacker News

>1. It is very very slow, ... below took 7s to generate with 4o, but 46s with GPT4.5

This is positively luxurious by o1-pro standards, which I'd say average 5 minutes. That said, I totally agree even ~45s isn't viable for real-time interactions. I'm sure it'll be optimized.

Of course, my comparing it to the highest-end CoT model in [publicly-known] existence isn't entirely fair since they're sort of apples and oranges.




I paid for Pro to try `o1-pro` and I can't seem to find any use case that justifies the insane inference time. `o3-mini-high` seems to do just as well in seconds vs. minutes.


What are you doing with it? For me, deep-research tasks are where 5 minutes is fine, or something really hard that would take me far longer to do myself.


I usually throw a lot of context at it and have it write unit tests in a certain style or implement something (with tests) according to a spec.

But the o3-mini-high results have been just as good.

I am fine with Deep Research taking 5-8 minutes; those are usually "reports" I can read whenever.


I bet I can generate unit tests just as fast, for a fraction of the cost, and probably with less typing, using a couple of vim macros.


Idk, it is pretty good at generating synthetic data and recognizing the different logic branches to exercise. Not perfect, but very helpful.





