
GPT-4 still confidently makes up sources for wrong answers and slips subtle mistakes into its output (the obvious mistakes are less of a nuisance, since they're easy to catch).

This isn't to say GPT-4 isn't cool, impressive, or a development worth watching, learning about, and being excited about. But I frequently see criticism dismissed with "you must be using 3.5," while I find that 4 still costs me more time than it would have potentially saved.




Of course this is possible, but usually a dismissal like that gets criticized as wrong only after it has actually been proven wrong.



