Wait until 'it' starts pushing 'specific' products/affiliates/links to you. Because that day _will_ come. And we do know that LLMs can be easily 'heavily influenced' (Gemini blew the lid on that with the black Nazis, etc.). It can easily be trained/modulated to "when drill, always go for Black & Decker", "when tissues, always suggest Kleenex", etc. And if it doesn't fit your specs, it will suggest "how about you trade Spec_A for 5% less on Product_B?"
One day Sam will have a chat with Jeff and presto, 99% of the links will be high-profit-margin AMZN affiliate links.
We are 100% going to get 'hey remember when LLMs were pure and not explicitly (or more dangerously: subtly) recommending things' nostalgia in years to come.
There are parallels to the early web here, I'm sure of it.
I think I'm a little more worried about AI being subtly influenced via its training data -- they can't explain why they emit the tokens they do, and even chain-of-thought / "explain your working" reasoning is similarly made up and hallucination-prone.
Hrm, not push. Nudge, with infinite patience, and with a very deep search tree, playing us all as chess pieces towards the meaning of the Universe and everything. Ad revenue.
I haven't tried shopping with ChatGPT. Is there any good reason to believe we aren't there already? It's not like they're going to program the chatbot to disclose all the money it has gotten for ads, and the recent paper demonstrating that the reasoning an LLM claimed for how it did math bore no resemblance whatsoever to how it actually did the math means it'll be plenty easy for it to give you endless ramblings about how it is just picking the best product when in fact it's just following its RLHF-trained paid brand preferences.
Besides, even the mighty power of LLMs and RLHF and all our AI tech probably can't overcome the fact that the input data is already so massaged that even if you set out to create a hypothetical LLM chatbot that was 100% on the side of the user, and not of the person actually running it, you would probably not succeed.
Unfortunately, neither you nor the AI ever escapes having to make a decision based on probability.
Coming back with a single product choice is probably always risky. Coming back with a pro/con list of choices might be slightly better, if the number of choices returned is larger than the number of manipulated choices. If you look for cordless drills and all the choices are Black & Decker, then it's obvious. That said, when it comes to a mix of paid products, it gets more difficult. A crude sanity check is sketched below.
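To make the "all the choices are Black & Decker" point concrete, here's a toy sketch of the kind of brand-diversity check a user could run on whatever list the bot returns. Everything here (the product dicts, the "brand" field, the 0.6 threshold) is made up for illustration, not a real API:

    from collections import Counter

    def looks_manipulated(products, max_brand_share=0.6):
        """Flag a recommendation list where a single brand dominates."""
        brands = [p["brand"] for p in products]
        if not brands:
            return False
        # Most common brand and how many of the results it accounts for
        _, count = Counter(brands).most_common(1)[0]
        return count / len(brands) > max_brand_share

    drills = [
        {"name": "20V cordless drill", "brand": "Black & Decker"},
        {"name": "Compact drill/driver", "brand": "Black & Decker"},
        {"name": "Hammer drill kit", "brand": "Black & Decker"},
        {"name": "12V drill", "brand": "Makita"},
    ]
    print(looks_manipulated(drills))  # True: 3 of 4 results share one brand

Of course, this only catches the blatant case; a mix of paid placements spread across several brands sails right past a check like this.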
I think the rise of LLMs for tasks like "shopping" is partly due to Gruen transfer. E-commerce sites have become so convoluted that LLMs are now a coping mechanism for years of bad UX.