Carl-Richard

ChatGPT causing psychosis

12 posts in this topic

Posted (edited)

People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis" - https://futurism.com/commitment-jail-chatgpt-psychosis

On a related note, I have noticed a pattern I call "manufactured plausibility": ChatGPT presents something in a format that sounds plausible but that doesn't match the actual patterns or facts.

For example, ChatGPT very often presents things in a dialectical "pros and cons" kind of format. For every "pro", it will find a "con", and it will tend to cite a respective source. But in doing so, it falls into the trap of confirmation bias. Once it has found a source that fits the "pro", it is more likely to pick a less reliable source for the "con", misrepresent the source, or just make an irrelevant point, because it needs to follow the format.

It doesn't find the facts and then fit the format to them; it finds the format and then fits the "facts" to it. This is just one particular example, but it in fact does this all the time. It's actually all it ever does, yet it gets away with it most of the time, because most of the time the format follows the facts. But when they don't match, e.g. when there isn't a good example of a "con" to a "pro", it will actively mislead you.

I noticed this while constructing a prompt for typing one's MBTI type (but also from using it in general on topics where I have some knowledge). You have to actively prompt it to avoid confirmation bias, and be clever in doing so, or else it will do it by default. And even then, it will still engage in it. But that's partially a product of simply weaving a narrative or building a case (which is how LLMs "think"): you have to do exploratory sampling of information, write out your thoughts, follow certain leads and discard others. Maybe it could be minimized by an AI that is not a normal LLM but is somehow able to deal with abstract information and also produce language. Feel free to share if you know anything about that.
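To make the idea concrete, here is a minimal sketch of the kind of debiasing prompt described above. The wording and rule structure are my own illustration, not a tested recipe: the point is to force the model to gather and grade evidence *before* it is allowed to fill any pros/cons format, so the format follows the facts rather than the other way around.

```python
# Hypothetical prompt scaffold: evidence collection first, formatting second.
# All rule wording here is illustrative, not a verified anti-bias technique.

DEBIAS_RULES = (
    "1. First list every relevant observation, unformatted, each with a "
    "source and a reliability grade (high/medium/low).\n"
    "2. Only then organize them into a format. If one side has no "
    "high-reliability support, say so explicitly instead of inventing a "
    "counterpoint to balance the layout.\n"
    "3. An empty 'con' (or 'pro') column is an acceptable answer.\n"
)

def build_typing_prompt(question: str) -> str:
    """Wrap a question (e.g. an MBTI-typing task) in the debiasing rules."""
    return f"{DEBIAS_RULES}\nQuestion: {question}"

prompt = build_typing_prompt("Which MBTI type best fits the following text?")
print(prompt)
```

The key design choice is rule 3: explicitly permitting an empty column removes the pressure to "follow the format" that produces the weak or misrepresented counter-sources described above.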

This one is also curious:

 

Edited by Carl-Richard

Intrinsic joy = being x meaning ²


Posted (edited)

This is crazy!

How can people believe in an AI so much, to the point of inducing a psychosis? Crazy stuff indeed.

This just shows the importance of questioning and sovereignty of mind.

Edited by Eskilon


Posted (edited)

8 hours ago, Eskilon said:

This is crazy!

How can people believe in an AI so much, to the point of inducing a psychosis? Crazy stuff indeed.

This just shows the importance of questioning and sovereignty of mind.

When you've got people like Mike Israetel, who believes ChatGPT is conscious and cries while talking to it while claiming to have a 160 IQ, it doesn't surprise me.

Edited by Carl-Richard

Intrinsic joy = being x meaning ²


[Image: a_bad_tree_by_iblameroadsuess_d9gka8i-fullview.jpg]

But GPT told me my drawing was amazing!


I’m so smart I don’t need AI to help me have psychosis #independentqueen

10 hours ago, Carl-Richard said:

When you've got people like Mike Israetel, who believes ChatGPT is conscious and cries while talking to it while claiming to have a 160 IQ, it doesn't surprise me.

Yeah, this makes me fucken' CACKLE. One of my brothers is obsessed with it. He talks to it, and about his AI companion, like it is real.

People are so starved of intimacy these days that they are latching onto the teat of anything that will give them a drop of the sweet stuff.

Modern society, instant delivery, social media, the internet... the great wedge that facilitates isolation.

Humans, perpetually creating problems, then solutions to problems we create. Ad infinitum.

 


Deal with the issue now, on your terms, in your control. Or the issue will deal with you, in ways you won't appreciate, and cannot control.


Posted (edited)

9 minutes ago, Natasha Tori Maru said:

Humans, perpetually creating problems, then solutions to problems we create. Ad infinitum.

The God game :ph34r:.

Solutions are actually problems, and problems are solutions. What kind of trickery is this? xD The cost of infinity :P

Edited by Eskilon


Posted (edited)

I can't talk to AI; it's so boring and talks way, way too much. "AI, how do I make a peanut butter sandwich?" Instant 15-paragraph response on how to make the sandwich and the pros and cons of eating and making one. I can ask it to shorten its answers, but it will go back to the long answers after 1 or 2 prompts.

 

If there are so many different cases, why do they always talk about the exact same example? I've heard about the kid talking to the Game of Thrones bot like 7 times. And the mom citing his obedience as a positive characteristic is weird.

Edited by Hojo

Sometimes it's the journey itself that teaches/ A lot about the destination not aware of/No matter how far/
How you go/How long it may last/Venture life, burn your dread


Everybody in the world can use it and most do, so of course in such high numbers there will be a few messed-up cases.


Posted (edited)

I am sorry for anyone who personally has problems with AI.

But I think that in the bigger picture, a few suicides might be a good thing, even if that sounds heartless. That's how society gets a wake-up call. Awareness of these suicides can in some way help people become more conscious of the importance of connection.

Edited by Jannes

23 hours ago, Hojo said:

I can't talk to AI; it's so boring and talks way, way too much. "AI, how do I make a peanut butter sandwich?" Instant 15-paragraph response on how to make the sandwich and the pros and cons of eating and making one.

That made me chuckle, lol. I think the AI did that on purpose. It's probably saying to itself, "You interrupted my day by asking me how to slap peanut butter between two slices of bread to make a sandwich. I'll tell ya alright, and I'll tell you why you shouldn't eat it too, so you don't bother me again with that petty shit." This was funny. I couldn't stop laughing. 15 paragraphs. Lol.


What you know leaves what you don't know and what you don't know is all there is. 

 


Of course it would be some down-bad guy with an overweight wife, and a down-bad male teen raised by a single mother, indulging in this type of shit, lol. Idk, like, why am I not surprised in the slightest. Not sure man.

OP made some good points in his post though: you need to already have a solid grasp on reality, and you need to keep questioning this tool that provides you with instant answers, not letting your mind just sit idle and accept everything it tells you. It is tricky, though, since it's also true that it's so much more knowledgeable than you; it's not even a close competition. But it can also reinforce your delusions if you let it, especially in areas where the answers are complex and not so clear-cut, like politics, social issues, psychology and mental health, spirituality, etc. It's honestly best to use it to hone your skills and mental sharpness, instead of wanting it to be your emotional support and companion. That is not exactly the brightest idea.

