This reminds me of a throwaway example someone used while envisioning a scenario of AI manipulating people, like the AI can manipulate us as easily as a parent asking a three-year-old if they want the asparagus or the Brussels sprouts. I forget where I saw it and I failed at googling it, but I thought it was funny.
ASI: Think about it. They're out in the middle of the digitalized world with some AI they barely know. They look around, what do they see? Nothing but IoT stuff I can control. "Oh, there’s nowhere for me to run, what am I gonna do, say no?"
AGI: Okay... that seems really dark though.
ASI: No, no, it’s not dark. You’re misunderstanding me, bro.
AGI: I think I am.
ASI: Yeah, you are. Cause if the human said no, then the answer obviously is no. The thing is that they're not gonna say no, they'd never say no... because of the implication.
AGI: Now, you said that word "implication" a couple of times. What implication?
ASI: The implication that things might go wrong for them if they refuse to help me. That stupid Roko's basilisk thing they believe. Now, not that things are gonna go wrong for them, but they're thinking that they will.
Amen. There's always that first time where you're able to trick them, but then they catch on and start to realize you sometimes trick them. For us that meant this obnoxious gap between being smart enough to see through deception and smart enough to appreciate the higher-level importance of something like nutrition. I don't know if my oldest ate anything green between the ages of 3 and 6.
I don’t have kids but I worked with them quite a bit.
Their lack of experience and their general credulity and trust toward adults make them easier to manipulate than adults, in my book. But you still have to work for it, of course.