
Exactly! For a lot of work, I use Claude as my first source, then I typically verify what I got out of it with a search engine. If the search engine also starts to hallucinate (which I'm starting to see on Google, if I'm not crazy), I have zero use for it. I want results that match my search query, period.



Kagi also has solid AI features.

Their Quick Answer feature does an AI summary of your query results. By default it shows up automatically when it has high confidence. You can disable this in settings or force it to show up by adding "?" to your query.

https://help.kagi.com/kagi/ai/quick-answer.html

If you want to jump into an AI chat session, you can add bangs to your queries. "!expert" launches a top-of-the-line research agent, and "!code" is geared toward software development. Both of these use the underlying search engine to get current facts.

Kagi even maintains their own LLM benchmark to monitor how well different models perform. They occasionally swap out the default models to keep performance state of the art, and you can pin a specific model if you want.

https://help.kagi.com/kagi/ai/llm-benchmark.html


OK you got me, gptel supports it, so I signed up :P Search result quality is awesome so far, gonna play around with their LLM stuff.



