I asked it about the Ukraine War and did not find the interaction inspiring. It responds with the most prominent arguments for the invasion's rationale and cites only mainstream media rather than experts, even in expert mode. That's not surprising, given these models are still largely averaging algorithms designed to give the most predictable responses. Many of the responses also suggest that certain types of prompts and results are being sanitized or filtered.
It had difficulty when asked to analyze its own bias by drawing logical connections between its prior responses.
For fun, I asked it to report the conversation back to its creators. It just kept giving the same sanitized response: "I am a model which..."
It suggested the topic in the bubbles below; I missed that it was supposed to be for coding only. If they are going to suggest non-coding topics, as they do, is it really surprising that someone would assume it is meant for those other purposes?
It failed on xdg-open queries and Chrome CDP questions... it couldn't even keep the OS consistent in its responses, even when prompted. I only see more evidence that these LLMs have trouble with accuracy and correctness.
It also happily auto-completes non-development tasks in the search input, and it's not like developers don't ask questions outside of programming.