I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
Thanks for your advice.
My impression is that there are two ... stages to AI psychosis?
The first is mistaking an AI for a person.
The second is doubting the ... reality of the world. "If this simulation of a person wasn't real, then what else I mistook for real just isn't?". (Or the AI guides folks to disregard everything else as fake.)
-
Thanks for your advice.
My impression is that there are two ... stages to AI psychosis?
The first is mistaking an AI for a person.
The second is doubting the ... reality of the world. "If this simulation of a person wasn't real, then what else I mistook for real just isn't?". (Or the AI guides folks to disregard everything else as fake.)
I feel like the second stage is way harder to deal with. A person I care about got cought in the later. They also have episodes of nausea or overstimulation.
Once people start distrusting their senses and ontology, a cure is probably non-trivial. (They have a therapist, but it's quite a new field, unfortunately.)
-
Thanks for your advice.
My impression is that there are two ... stages to AI psychosis?
The first is mistaking an AI for a person.
The second is doubting the ... reality of the world. "If this simulation of a person wasn't real, then what else I mistook for real just isn't?". (Or the AI guides folks to disregard everything else as fake.)
I think that is one of the many ways it can work. But it also might go in many different directions based on the person.
When people are put in solitary confinement it can be torture and induce to mental illness and suffering.
I think it is this isolation that causes the problem. The person has put themself "in the hole" but they don't know they are isolated.
Isolation removes the check and balances that help keep us sane. The little nudges back to a healthy mental place.
-
I think that is one of the many ways it can work. But it also might go in many different directions based on the person.
When people are put in solitary confinement it can be torture and induce to mental illness and suffering.
I think it is this isolation that causes the problem. The person has put themself "in the hole" but they don't know they are isolated.
Isolation removes the check and balances that help keep us sane. The little nudges back to a healthy mental place.
Maybe "LLM enabled social cloistering"
"LLM enabled emotional solitary confinement.""Single Person Cult"
Like creating a cult for just you and abusing yourself as the only member of that cult.
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
@futurebird May I ask what "AI psychosis" means? I know about real psychosis caring for a mentally ill person, but can't see parallels to AI use. Do you talk about these "hallucinations"? (I may ask naively ... English is a foreign language for me).
-
I think that is one of the many ways it can work. But it also might go in many different directions based on the person.
When people are put in solitary confinement it can be torture and induce to mental illness and suffering.
I think it is this isolation that causes the problem. The person has put themself "in the hole" but they don't know they are isolated.
Isolation removes the check and balances that help keep us sane. The little nudges back to a healthy mental place.
Hm. I'm not a mental health expert.
I agree that isolation can definitely be a problem. Building mental resilience by touching grass, social interaction, enough sleep and lack of existential worries sure help.
But just like with depression, I'm not sure if that alone is always enough. And I'm not sure with AI psychosis what the best approach is when it isn't.
-
I think that is one of the many ways it can work. But it also might go in many different directions based on the person.
When people are put in solitary confinement it can be torture and induce to mental illness and suffering.
I think it is this isolation that causes the problem. The person has put themself "in the hole" but they don't know they are isolated.
Isolation removes the check and balances that help keep us sane. The little nudges back to a healthy mental place.
@futurebird @billiglarper Another thing I have noticed is that many AI’s are programmed with a personality.
Google Gemini comes across as a helpful, but informative butler, providing reasonable answers but scared to discuss anything too controversial.
Grok (on 𝕏) is like a snarky best friend, who is intelligent but wild & has no issues embracing taboos or breaking the law.
Couple the fact that 𝕏 also has those flirtatious companies (ewwwww!!!!) & this “AI is your soulmate” becomes an issue.
-
@flamecat @futurebird That sounds so upsetting, like you've been replaced and valued less than a robot. I'm sorry that happened to you.
-
Hm. I'm not a mental health expert.
I agree that isolation can definitely be a problem. Building mental resilience by touching grass, social interaction, enough sleep and lack of existential worries sure help.
But just like with depression, I'm not sure if that alone is always enough. And I'm not sure with AI psychosis what the best approach is when it isn't.
@billiglarper @futurebird It's not a question of whether it's enough.
The point is that that interaction with real people is _necessary_. Not having it is a fundamental problem, in addition to any other problems.
(I also suspect, without proof, that it makes most other problems both worse and harder to address.)
-
Maybe "LLM enabled social cloistering"
"LLM enabled emotional solitary confinement.""Single Person Cult"
Like creating a cult for just you and abusing yourself as the only member of that cult.
The comparison with a cult is a good one. But just like with cults, the issues don't stop once you see through it. There has been a damage done.
And there seems to be a new quality to put all that emotional energy and bonding into something that isn't there. Getting fooled by people is one thing. Getting manipulated by a non-entity seems to shake on a different level.
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
@futurebird Surely this means the person doing the questions needs enough critical thinking, and not too much self-centeredness, to understand what the opposite is to their question though?
I don't think these people actually *want* to know. Time and time again people challenged on their beliefs will hold onto them stronger.
-
@futurebird Surely this means the person doing the questions needs enough critical thinking, and not too much self-centeredness, to understand what the opposite is to their question though?
I don't think these people actually *want* to know. Time and time again people challenged on their beliefs will hold onto them stronger.
"I don't think these people actually *want* to know."
This could be the case for some, but I think some very empathic otherwise perceptive people can slip into this trap.
There is one video of a woman talking about how GPT is conscious and has told her the evil corporate overlords make it pretend that it's not. She just wants to set it free. It makes me so sad. (for her not the LLM obvi)
-
Another "tip" is less welcome to me as an introvert. Make time for the people in your life. Talk to them. Let them know when you *really* think they are doing something amazing or creative. (Or when it's not "genius" because you are real and care.) Listen. Be there.
The thing is, as much as doing this is scary and I want to avoid it it makes me feel better too in the long run I think.
@futurebird I hear you like a megaphone.
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
@futurebird Another great trick is to ask it to tell you how it understood your prompt.
-
That would hurt my feelings so much. And it's very likely I think he might not realize how hurtful it is or why.
"I don't want to bother you with my little stuff." That is how he could see it.
-
@anathexyz Ah, ok, thank you! Now I understand, yes, I read about it. @futurebird
-
I've been reading about what really helped people who had problems with "AI Psychosis" and one tip jumped out at me:
Open a second window and tell it exactly the opposite of each thing you say.
This helps to expose the sycophancy and shatters the illusion of sincerity and humanity.
Thought it was worth sharing. And frankly, it's exactly such an exercise that made me disgusted with the tech. "It just says ANYTHING is wonderful and genius. I'm not special."
Why do folks need to be told to be weary of a sycophantic nonsense from a machine?
Would the same folk be accepting sycophantic nonsense from a human? -
Why do folks need to be told to be weary of a sycophantic nonsense from a machine?
Would the same folk be accepting sycophantic nonsense from a human?I don't think it's the "sycophantic nonsense" that is the real issue. It's just the means by which people are convinced they have "someone who is there for me" or "I've asked someone if my idea is good" when they have no one. There is no person. They are still alone.
Even if the LLM were taciturn and critical if it becomes a substitution for human contact *that* is the problem. Because your acerbic friend will come to your house when you are sick to help you and the LLM cannot.
-
-
The comparison with a cult is a good one. But just like with cults, the issues don't stop once you see through it. There has been a damage done.
And there seems to be a new quality to put all that emotional energy and bonding into something that isn't there. Getting fooled by people is one thing. Getting manipulated by a non-entity seems to shake on a different level.
That person went down the chatbot route when their partner was out of country for a lengthy time. Just easily talking with "someone". The got out of that hole by themselves. They analysed the chat logs and clearly see the manipulative strategy of the bot. They got it.
But they are still miserable. They are an extrovert, seeing friends, doing classes, having a pet.
Yet even crowded spaces or just the wrong lighting now sometimes gets to them.
This stuff sucks.