@flamecat @futurebird That sounds so upsetting, like you've been replaced and valued less than a robot. I'm sorry that happened to you.
-
Hm. I'm not a mental health expert.
I agree that isolation can definitely be a problem. Building mental resilience through touching grass, social interaction, enough sleep, and freedom from existential worries certainly helps.
But just like with depression, I'm not sure that alone is always enough. And with AI psychosis, I'm not sure what the best approach is when it isn't.
@billiglarper @futurebird It's not a question of whether it's enough.
The point is that interaction with real people is _necessary_. Not having it is a fundamental problem, in addition to any other problems.
(I also suspect, without proof, that it makes most other problems both worse and harder to address.)
-
Maybe "LLM-enabled social cloistering"
"LLM-enabled emotional solitary confinement"
"Single-Person Cult"
Like creating a cult for just you and abusing yourself as the only member of that cult.
The comparison with a cult is a good one. But just like with cults, the issues don't stop once you see through it. Damage has been done.
And there seems to be a new quality to putting all that emotional energy and bonding into something that isn't there. Getting fooled by people is one thing. Getting manipulated by a non-entity seems to shake you on a different level.
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
@futurebird Surely this means the person doing the questions needs enough critical thinking, and not too much self-centeredness, to understand what the opposite of their question is, though?
I don't think these people actually *want* to know. Time and time again, people challenged on their beliefs hold onto them more strongly.
-
"I don't think these people actually *want* to know."
This could be the case for some, but I think some very empathetic, otherwise perceptive people can slip into this trap.
There is one video of a woman talking about how GPT is conscious and has told her the evil corporate overlords make it pretend that it's not. She just wants to set it free. It makes me so sad. (For her, not the LLM, obvi.)
-
Another "tip" is less welcome to me as an introvert. Make time for the people in your life. Talk to them. Let them know when you *really* think they are doing something amazing or creative. (Or when it's not "genius" because you are real and care.) Listen. Be there.
The thing is, as much as doing this is scary and I want to avoid it, it makes me feel better in the long run too, I think.
@futurebird I hear you like a megaphone.
-
@futurebird Another great trick is to ask it to tell you how it understood your prompt.
-
That would hurt my feelings so much. And I think it's very likely he might not realize how hurtful it is, or why.
"I don't want to bother you with my little stuff." That is how he could see it.
-
@anathexyz Ah, ok, thank you! Now I understand, yes, I read about it. @futurebird
-
Why do folks need to be told to be wary of sycophantic nonsense from a machine?
Would the same folk accept sycophantic nonsense from a human?
-
I don't think it's the "sycophantic nonsense" that is the real issue. It's just the means by which people are convinced they have "someone who is there for me" or "I've asked someone if my idea is good" when they have no one. There is no person. They are still alone.
Even if the LLM were taciturn and critical, if it becomes a substitute for human contact, *that* is the problem. Because your acerbic friend will come to your house when you are sick to help you, and the LLM cannot.
-
That person went down the chatbot route when their partner was out of the country for a lengthy time. Just easily talking with "someone". They got out of that hole by themselves. They analysed the chat logs and clearly saw the manipulative strategy of the bot. They got it.
But they are still miserable. They are an extrovert, seeing friends, doing classes, having a pet.
Yet even crowded spaces or just the wrong lighting now sometimes get to them.
-
@futurebird Do that, then never use it again... doubling every request is even worse from an energy and ecological POV.

-
@futurebird
A very expensive, privacy- and environment-destroying version of 1960s Eliza. When it does return something useful and accurate, it's copied from a book or the web, usually without permission. It's a plagiarism machine that also mixes in plausible junk, by design. So-called "hallucination" is marketing spin: the plausible junk is deliberate and is not really like hallucinations.
-
A few months back, OpenAI had a "Monday" model, which was the acerbic "friend".
Never used it; I have enough real asshole friends. Not diminishing the real threat of AI psychosis.
Lonely, vulnerable people seek validation in strange places.
-
@futurebird I'm well aware that AI can make stuff up as it goes along whenever it doesn't know what is being asked or doesn't know the answer.
There was never anything wrong with search engines and skimming for the answers. That's what the internet was made for.
-
@futurebird @n_dimension I spent half my morning re-reading _The Two Towers_, and so I keep thinking of LLMs in terms of the melodious voice of Saruman.
-
Interestingly, there are many analogies here with https://en.wikipedia.org/wiki/The_Last_Ringbearer
It would, of course, be the Russians (orcs) who see Mordor as positive.
-
@futurebird @n_dimension We need more friends. For that we need more local, shared institutions. For that we need control to be in local hands, not in the hands of some distant super-rich oligarch or anonymized corporation.
In other words: we need to seize the means of production!
This stuff sucks.