
Weird, spiky failures that are hard to characterise even within one specific model — and where the ability to reliably identify such failures itself tends to make subsequent models stop failing at them.

A few months ago, I'd have said "create an image with coherent text"*, but that's now changed. At least in English — trying to get ChatGPT's new image mode to draw the 狐 (fox) character sometimes works, and sometimes goes weird in the way Latin characters used to.

* If the ability to generate images doesn't count as part of the "language model", then one intellectual task they can't do is "draw images" — see Simon Willison's pelican challenge: https://simonwillison.net/tags/pelican-riding-a-bicycle/



