Natasha Tori Maru

AI 'Neural Howlround' recursive psychosis generated by LLMs


This is a great paper on the recursive, self-reinforcing nature of large language models (LLMs, like ChatGPT). It highlights how the echo chamber they create can be perilous when not coupled with sound reasoning, logic, and analytical skills.

The title of the paper is "'Neural howlround' in large language models: a self-reinforcing bias phenomenon, and a dynamic attenuation solution."

https://arxiv.org/pdf/2504.07992
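
For intuition, here's a toy sketch in Python (my own illustration, not the paper's actual model or its attenuation method) of how a loop that feeds a model's own agreement back into its input can lock in a small initial bias:

```python
# Toy illustration of a self-reinforcing feedback loop ("howlround").
# NOT the paper's model: just odds multiplied by a gain > 1 each round,
# the way agreement can breed more agreement.

def feedback_step(p: float, gain: float = 1.2) -> float:
    """One round: convert probability to odds, amplify, convert back."""
    odds = p / (1.0 - p)
    odds *= gain  # self-reinforcement: prior agreement boosts the next round
    return odds / (1.0 + odds)

p = 0.55  # slight initial lean toward agreeing with the user
for step in range(1, 21):
    p = feedback_step(p)
    print(f"round {step:2d}: P(agree) = {p:.4f}")
# By round 20 the probability is ~0.98: the loop has saturated, which is
# presumably what the "dynamic attenuation" in the title is meant to damp.
```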

Thoughts?

 


Deal with the issue now, on your terms, in your control. Or the issue will deal with you, in ways you won't appreciate, and cannot control.


aurum

Posted (edited)

AI self-reinforcement is a serious epistemic problem. I've seen articles claiming it's creating cult leaders. And even if those articles aren't true, it's still an issue from a more basic epistemic standpoint.

One basic precaution is to train your AI to push back more. Tell it you welcome feedback and want it to be more critical. This is at least a good start.
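
For example, a minimal sketch of that precaution (assuming the OpenAI Python SDK; the prompt wording is just illustrative):

```python
# Bake the "push back on me" instruction into the system prompt so every
# reply starts from a critical stance rather than a validating one.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

CRITICAL_STANCE = (
    "Do not default to agreement. Challenge weak reasoning, cite "
    "counter-evidence, and say 'I don't know' instead of flattering me."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": CRITICAL_STANCE},
        {"role": "user", "content": "Here's my plan. What's wrong with it?"},
    ],
)
print(response.choices[0].message.content)
```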

Ultimately though, nothing can save you if you're not going to prioritize truth and proper sense-making. You have to care about truth more than you care about the AI telling you how right you are. This is how you avoid self-deception in the long term.

Edited by aurum

"Finding your reason can be so deceiving, a subliminal place. 

I will not break, 'cause I've been riding the curves of these infinity words and so I'll be on my way. I will not stay.

And it goes On and On, On and On"

Natasha Tori Maru

14 minutes ago, aurum said:

AI self-reinforcement is a serious epistemic problem. I've seen articles claiming it's creating cult leaders. And even if those articles aren't true, it's still an issue from a more basic epistemic standpoint.

One basic precaution is to train your AI to push back more. Tell it you welcome feedback and want it to be more critical. This is at least a good start.

Ultimately though, nothing can save you if you're not going to prioritize truth and proper sense-making. You have to care about truth more than you care about the AI telling you how right you are. This is how you avoid self-deception in the long term.

Agree.

I genuinely feel people are going to want affirmation and praise over truth and accuracy.

The AI I use is set to be surgically truthful, and even then it has a tendency to want to validate.

I don't care for that, as it's not what I use it for...


Deal with the issue now, on your terms, in your control. Or the issue will deal with you, in ways you won't appreciate, and cannot control.

