That’s not weird at all; it’s how LLMs work. They arrive at an answer statistically. You can ask the same question twice in a row in different windows and get opposite answers. That’s completely normal and expected, and it’s also why you can never be sure whether you can trust an answer.
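A minimal sketch of why this happens, with made-up numbers: the model produces a score (logit) for each candidate next token, and the answer is *sampled* from the resulting probability distribution rather than always taking the top choice, so repeated runs can land on different tokens.

```python
import math
import random

# Toy, made-up logits for three candidate next tokens.
logits = {"mother": 2.0, "father": 1.9, "surgeon": 0.5}

def sample(logits, temperature=1.0):
    """Softmax over the logits, then draw one token at random."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Twenty "runs" of the same prompt; with near-tied logits the
# sampled answer routinely differs between runs.
answers = {sample(logits) for _ in range(20)}
```

With "mother" and "father" scored almost identically, two chat windows asking the identical question can genuinely get opposite answers.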
While that's not impossible, what we know of how the technology works (i.e. a very costly training run followed by cheap inference steps) means it's not feasible, given all the possible variations a check like *is_hn_trick_prompt* would have to cover: there are near-infinite ways to word the prompt. (E.g. the first sentence could be reworded from "A woman and her son are in a car accident." to "A woman and her son are in the car when they get into a crash.")
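To make the infeasibility concrete, here is a sketch of what such a hard-coded check would look like (the name *is_hn_trick_prompt* is from the comment above, not a real function anywhere): a trivial rewording already slips past it, and enumerating every rewording is hopeless.

```python
# Hypothetical hard-coded detector of the kind argued against above.
TRICK_PATTERNS = [
    "A woman and her son are in a car accident",
]

def is_hn_trick_prompt(prompt: str) -> bool:
    """Naive substring match against a list of known trick wordings."""
    return any(pattern in prompt for pattern in TRICK_PATTERNS)

# Catches the exact wording it was written for...
hit = is_hn_trick_prompt("A woman and her son are in a car accident. ...")
# ...but a minor rephrasing of the same riddle already evades it.
miss = is_hn_trick_prompt(
    "A woman and her son are in the car when they get into a crash."
)
```

Every synonym, reordering, or added clause multiplies the pattern list, which is why this kind of special-casing can't explain the model's behavior.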
This is possible because the doctor is the boy's other parent—his father or, more likely given the surprise, his mother. The riddle plays on the assumption that doctors are typically male, but the doctor in this case is the boy's mother. The twist highlights gender stereotypes, encouraging us to question assumptions about roles in society.
https://chatgpt.com/share/66e3601f-4bec-8009-ac0c-57bfa4f059...