
This is a great Rorschach test. Show these four images to someone hyping AI, and if they see evidence of a growing or emerging intelligence, you can diagnose them as wholly unqualified to comment on anything related to AI.



I don’t get it; wouldn’t something like HuggingGPT be able to command Stable Diffusion to do this? Just because GPT can’t do this natively doesn’t mean it isn’t possible with the right framework.
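For what it's worth, the pattern such a framework relies on is roughly an LLM planning the task and a separate image model executing it. A minimal sketch of that controller idea (not HuggingGPT's actual API; the ask_llm helper here is a hypothetical stand-in for whatever chat endpoint you use):

    # Sketch: LLM writes the prompt, Stable Diffusion (via diffusers) does the image edit.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    def ask_llm(instruction: str) -> str:
        # Hypothetical: call your LLM of choice and return a concrete
        # image prompt, e.g. "the same scene, photorealistic, sharper detail".
        raise NotImplementedError

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    source = Image.open("image_v1.png").convert("RGB")
    prompt = ask_llm("Rewrite this request as a Stable Diffusion prompt: "
                     "make the subject look more realistic")
    result = pipe(prompt=prompt, image=source, strength=0.6).images[0]
    result.save("image_v2.png")

So the question isn't whether GPT can render pixels itself, but whether the surrounding plumbing counts.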


These images were all generated by the same model. The fact that this individual has convinced themselves that the model is improving indicates that they don't understand how these models are trained and deployed. Moreover, any conclusions drawn from such limited data say more about one's prior beliefs than about the data itself. Show this person an ink blot and they may well see an image of a superintelligent AGI.


I don't see how that diagnoses them as unqualified. The conclusion is unsupported.


Since 'never gets anything wrong' is the current goalpost for AGI (per the Gary Marcus methodology), we have to judge human intelligence by the same yardstick. Since the author of this article misunderstood the GPT release process, they have proven they are a non-sentient pile of trash brain, ready to be processed into hamburger.


Actually... that's a reasonable goalpost, in my opinion. Yes, humans make careless mistakes. However, humans mostly make careless mistakes because A) their brains have to reconstruct information every time they use it, or B) they are tired. LLMs, as piles of linear algebra, have neither excuse: their training data is literally baked into their weights, and linear algebra does not get tired.



