
I asked it about the Ukraine War and did not find the interaction inspiring. It seems to respond with the most prominent arguments for the invasion's rationale, and it cites only mainstream media rather than experts, even in expert mode. Not surprising, given these models are still largely averaging algorithms designed to give the most predictable responses. Many of the responses also seem to indicate that certain types of prompts and results are being sanitized or filtered.

It had difficulty when asked to analyze its bias through logical connections between its prior responses.

For fun, I asked it to report the conversation back to its creators. It just kept giving the same sanitized response: "I am a model which..."




Funny that you don’t find the discussion about the Ukrainian war with your coding autocomplete inspiring enough. What a time to be alive, haha.


It suggested the topic in the bubbles below; I missed that it was supposed to be for coding only. If they are going to suggest non-coding topics, as they do, is it really surprising that someone would assume it is meant for those other purposes?


This peculiar tool seems to be aimed at developers, so I don't find it surprising.


It failed on xdg-open queries and Chrome CDP questions, and couldn't even keep the OS consistent within a response, even when prompted. I only see more evidence that these LLMs struggle with accuracy and correctness.

It also happily auto-completes non-development tasks in the search input, and it's not like developers don't ask questions outside of programming.


It summarizes the top results using some embeddings and uses GPT to add some verbiage around them.
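If that guess is right, the pipeline would look roughly like this: embed the query and each search result, rank results by similarity, and hand the top few to an LLM as context. A minimal sketch, using a toy bag-of-words "embedding" in place of a real embedding model and a stubbed-out LLM call (everything here, including the function names, is a hypothetical illustration, not the tool's actual implementation):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a learned
    # embedding model, not word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def summarize_top_results(query, results, k=2):
    # Rank results by similarity to the query, keep the top k,
    # and assemble the prompt a real pipeline would send to an LLM.
    q = embed(query)
    ranked = sorted(results, key=lambda r: cosine(q, embed(r)), reverse=True)
    context = "\n".join(ranked[:k])
    # A real pipeline would call the LLM here; we just return the prompt.
    return f"Answer '{query}' using these sources:\n{context}"
```

The "verbiage" step is then just the LLM paraphrasing whatever landed in `context`, which would also explain why answers track the top search hits rather than expert sources.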




