@futurebird Not to be a peddler of black pills here, but the concepts of sycophancy, yes-people, insincerity, manipulative behavior, etc. etc. of course all predate LLM-based chat-bots.
There is a deeper abyss waiting behind the rather shallow one (the danger of mistaking a chatbot for a person, or for a distinct entity at all): taking an experiment you can actually conduct, A/B-testing two instances of the same chatbot with different inputs, turning it into a thought experiment about running the same test on actual people, and drawing extreme conclusions from it.