I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
-
@futurebird I have only experimented with ChatGPT once but... Same. If it felt more nitpicky and less emphatic, like a university professor, I'd feel more suspicious it was intelligent. I *know* I'm not right all the time. I *want* to be corrected. Sycophancy creeps me out.
"Sycophancy creeps me out."
It's very creepy. The only people who have talked to me with that much positivity and agreeableness *ever* in my life were the worst sort of men who wanted to sleep with me in my 20s. I have a deep visceral negative reaction to that kind of consistent flattery.
It makes my skin crawl.
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
@futurebird Have you seen Eddy Burback's video where he tells ChatGPT he was the smartest baby of 1996 and then just plays along as it suggests he has psychic powers etc. too? It's pretty funny/horrifying. https://m.youtube.com/watch?v=VRjgNgJms3Q
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
@futurebird Brave. I’ve never tried a query. AI does require wading through search results for documents. On desktop & phone OS it’s just another layer between me and files & photos.
-
@futurebird I just asked Claude what it thinks about our half-project report:
> Please play the role of an evaluator in the Innosuisse grant system. Write what you think when reading the report: are you convinced the project is on a good track? Do you agree that the project should be continued? What are dark spots where you think you would need more information in order to decide on a go/no-go?
Its answer was very direct and very critical. But really useful.
Yeah, but asking it to change breaks the veil that makes "AI psychosis" dangerous to some degree.
The issue is that people get the feeling there is a thinking being in the machine and allow it to satisfy critical emotional needs for human connection that we all have. The program takes up space and time that could go to real people in their lives.
It's emotional empty calories. Food without real sustenance and if that dominates your diet you will get sick.
-
But why is it so fulfilling to have a good back and forth with someone? To disagree and pull the whole problem apart and ideally come out on top? (though it's also fun to discover you needed to learn something too, it's just less fun and rewarding)
It's fulfilling because they care about what you are saying enough to criticize it. The difference between the art teacher who says "that's a very nice drawing" and "I can see that you are trying to do X but it's failing/working in these ways."
@futurebird Huh! I also find a good back and forth fulfilling, but I think it's more exciting when I'm interestingly wrong.
-
Yeah, but asking it to change breaks the veil that makes "AI psychosis" dangerous to some degree.
The issue is that people get the feeling there is a thinking being in the machine and allow it to satisfy critical emotional needs for human connection that we all have. The program takes up space and time that could go to real people in their lives.
It's emotional empty calories. Food without real sustenance and if that dominates your diet you will get sick.
"I don't need to eat anything. I just looked at this photo of a meal and now I feel full. It was delicious. I didn't even need to cook or go out to get it. So expedient."
And then slowly they starve.
-
@futurebird Huh! I also find a good back and forth fulfilling, but I think it's more exciting when I'm interestingly wrong.
I'm trying to cultivate that perspective. But I do really love to be right. Probably too much.
-
Yeah, but asking it to change breaks the veil that makes "AI psychosis" dangerous to some degree.
The issue is that people get the feeling there is a thinking being in the machine and allow it to satisfy critical emotional needs for human connection that we all have. The program takes up space and time that could go to real people in their lives.
It's emotional empty calories. Food without real sustenance and if that dominates your diet you will get sick.
@futurebird @ligasser If you doubt one AI, just ask the same questions of a different one. I use ChatGPT a lot, BUT I also play devil's advocate by sending the same questions, with demands for evidence, to Gemini. Phrasing the questions is highly important.
-
"I don't need to eat anything. I just looked at this photo of a meal and now I feel full. It was delicious. I didn't even need to cook or go out to get it. So expedient."
And then slowly they starve.
This can be very dangerous for people who think "I don't really ever need to talk to anyone about my feelings."
This isn't true, it's just their needs are minimal.
"Feeling down."
"ya"That's two letters but getting such a response can make you feel so much better. It represents someone, should things get worse, who might come over and help you.
A chatbot can say "ya" too. But, it doesn't make you feel better... **unless** you think it's a person. That's the danger.
-
"I don't need to eat anything. I just looked at this photo of a meal and now I feel full. It was delicious. I didn't even need to cook or go out to get it. So expedient."
And then slowly they starve.
@futurebird @ligasser Do you know about Julodimorpha bakewelli? The beetle finds a brand of beer bottle so attractive that it will ignore other beetles and even predators, endangering the species. https://www.australiangeographic.com.au/news/2011/11/nature-mimics-why-bugs-mate-with-beer-bottles/
-
Frankly, I'm kind of glad these GPTs were so sycophantic. A more critical voice might have been more appealing to me. A contrarian bot who always nitpicks and argues with you.
That's how Facebook's old 2016 algorithm wasted so much of my time. I got sucked in by the opportunity to dismantle someone who is wrong. Not the most ... healthy personal quality. I'm working on it always.
The easily picked apart rage bait kept me there for far too long.
-
The easily picked apart rage bait kept me there for far too long.
Yup. I don't like to admit how well that worked on me.
Show me someone casually but confidently wrong, with pretensions of being an intellectual, and I'm so excited to get in the ring and start proving them wrong.
Facebook could find such posts extremely efficiently. These were posts from real people I didn't know (who weren't even talking to me). They would be served up on my dashboard because I'd type a response.
Now it might not even be a person.
-
@futurebird @ligasser If you doubt one AI, just ask the same questions of a different one. I use ChatGPT a lot, BUT I also play devil's advocate by sending the same questions, with demands for evidence, to Gemini. Phrasing the questions is highly important.
Very interesting …
You can quickly tell how dumbass your preferred AI slop provider is by asking it to write a 5,000-word essay in support of your thesis. And then do the same for the antithesis. Two mutually exclusive, well-formed and seemingly well-reasoned arguments.
-
@futurebird I have only experimented with ChatGPT once but... Same. If it felt more nitpicky and less emphatic, like a university professor, I'd feel more suspicious it was intelligent. I *know* I'm not right all the time. I *want* to be corrected. Sycophancy creeps me out.
There's an old André Previn joke where a comedian is playing the piano badly in his orchestra. Previn accuses the comedian of playing all the wrong notes. The comedian replies that he is playing all the right notes, just in the wrong order.
That sums up AI for me. We have no way of knowing whether it has organised the facts into the right order.
-
"I don't need to eat anything. I just looked at this photo of a meal and now I feel full. It was delicious. I didn't even need to cook or go out to get it. So expedient."
And then slowly they starve.
@futurebird OK, I can agree with that. We do need more human interaction. At least I see my kids going in that direction, which is nice!
What I like about the LLMs is the possibility of giving higher-quality documents for review, because the low-hanging fruit has already been culled. But we should definitely profit from all the free time we get!
Who was it in the 60s who said we'd only be working like 2 days a week?
-
This can be very dangerous for people who think "I don't really ever need to talk to anyone about my feelings."
This isn't true, it's just their needs are minimal.
"Feeling down."
"ya"That's two letters but getting such a response can make you feel so much better. It represents someone, should things get worse, who might come over and help you.
A chatbot can say "ya" too. But, it doesn't make you feel better... **unless** you think it's a person. That's the danger.
@futurebird Let's hope that people will still want to see other people.

<sarcasm>Or, less nice: natural selection will take care of that?</sarcasm>
-
Frankly, I'm kind of glad these GPTs were so sycophantic. A more critical voice might have been more appealing to me. A contrarian bot who always nitpicks and argues with you.
That's how Facebook's old 2016 algorithm wasted so much of my time. I got sucked in by the opportunity to dismantle someone who is wrong. Not the most ... healthy personal quality. I'm working on it always.
@futurebird Fuck... Thinking about it, I would hate a contrarian bot but I *might* become addicted to it. Or at least caught up in it sometimes. That's what Twitter was, right?
I'm pretty hedonistic. Sycophancy is just overdue recognition for me, but it's *cheap* for a bot to be a sycophant. It's just words, which are free. I can do that myself in my head. If a pretty girl were telling me I'm lovely, at least she'd be using time she could otherwise spend streaming on Twitch and earning money! Value!
-
Another "tip" is less welcome to me as an introvert. Make time for the people in your life. Talk to them. Let them know when you *really* think they are doing something amazing or creative. (Or when it's not "genius" because you are real and care.) Listen. Be there.
The thing is, as much as doing this is scary and I want to avoid it, it makes me feel better too in the long run, I think.
@futurebird Not to be a peddler of black pills here, but the concepts of sycophancy, yes-people, insincerity, manipulative behavior, etc. etc. of course all predate LLM-based chat-bots.
There is a deeper abyss waiting behind the rather shallow one (the danger of mistaking a chatbot for a person, or for a distinct entity at all): taking this experiment you can actually conduct (A/B-testing two instances of the same chatbot with different inputs) and turning it into a thought experiment about doing the same thing with actual people, then drawing extreme conclusions from it.
-
But why is it so fulfilling to have a good back and forth with someone? To disagree and pull the whole problem apart and ideally come out on top? (though it's also fun to discover you needed to learn something too, it's just less fun and rewarding)
It's fulfilling because they care about what you are saying enough to criticize it. The difference between the art teacher who says "that's a very nice drawing" and "I can see that you are trying to do X but it's failing/working in these ways."
@futurebird Perhaps I mentioned this before, not sure.
I was a member of a Toronto-based forum with global reach from 1999 till 2010, when it stopped. It was about old Canadian/Celtic stories, and just random off-topic chat. About 70% female, plenty with Asian roots, and the average IT level was far above mine. The game of self-stalking and double-googling was played sometimes: "Try to find me somewhere else" and "google once and google twice for the opposite". A lot of fun with that, and the instinct was woken up.
-
@futurebird OK, I can agree with that. We do need more human interaction. At least I see my kids going in that direction, which is nice!
What I like about the LLMs is the possibility of giving higher-quality documents for review, because the low-hanging fruit has already been culled. But we should definitely profit from all the free time we get!
Who was it in the 60s who said we'd only be working like 2 days a week?
@ligasser @futurebird If you think it's high quality I shudder to think what you were seeing before.