zunnyman

AI Safety: How do we prepare for the AI, GPT, and LLMs revolution?


I'm hoping we can start a thread here for an honest, open-minded, slightly fearful (in a healthy way) conversation about the various data points on AI safety and what actions we can take to promote it. 

I'll start with some background on why I believe such a thread is critical. 

From a data standpoint: 

a) Stanford released a large language model that was supposed to take 4 million dollars and almost a decade to build. Instead, it was built for 600 dollars in 5 weeks. This is because existing language models can be used to train new ones, and it's open-sourced (there's a rough sketch of this recipe after this list). In more general terms, the rate at which AI is developing is a) fast as fuck, and b) faster than we imagined.

b) Elon Musk has been pushing AI safety for years, and he's on the cutting edge of AI. Recently he signed an open letter, along with many of the world's top tech leaders, calling for a six-month pause on the most powerful AI development. That shows how fast this is developing and how serious this is. 

c) GPT-4 is showing sparks of AGI (artificial general intelligence). 

d) You can argue that the most powerful human beings in the world won't be government leaders or powerful CEOs, but the humans who control, create, or understand AGI, or AGI itself. If a human commits a crime, it's easy for us to have laws to punish them. But what happens when AI commits crimes on a mass scale? Who is to blame, and what punishments could we even impose?

e) Despite the rate at which this ultra-powerful phenomenon is occurring, government is slow or not doing anything at all. And EVEN IF it did something, the cat is out of the bag now. 

f) There are many more data points, but I guess the most important thing is setting the context for this discussion. 
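To make point (a) concrete, here is a minimal sketch of the general recipe being described: fine-tune a small open model on instruction/response pairs that were generated by a larger existing model. The base model, hyperparameters, and data below are placeholders for illustration, not Stanford's actual pipeline.

```python
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Step 1 (not shown): prompt an existing strong model to produce thousands of
# (instruction, response) pairs. Two hard-coded pairs stand in for that data here.
teacher_generated = [
    {"instruction": "Explain photosynthesis in one sentence.",
     "response": "Plants turn sunlight, water, and CO2 into sugar and oxygen."},
    {"instruction": "Give three uses for a paperclip.",
     "response": "Holding paper, resetting electronics, improvising a hook."},
]

class InstructionDataset(Dataset):
    """Formats each pair as one prompt+answer string for causal-LM fine-tuning."""
    def __init__(self, pairs, tokenizer, max_len=256):
        texts = [f"### Instruction:\n{p['instruction']}\n### Response:\n{p['response']}"
                 for p in pairs]
        self.items = [tokenizer(t, truncation=True, max_length=max_len) for t in texts]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]

base = "gpt2"  # placeholder small base model (Alpaca started from a larger open model)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Step 2: the cheap part -- a short fine-tuning run, not a from-scratch pretraining.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="student-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=InstructionDataset(teacher_generated, tokenizer),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The expensive knowledge lives in the base model and in the teacher that generated the data; the fine-tune itself is what comes in at a few hundred dollars.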

Now, what do we, as individuals and as a society, actually have control over?

Hopefully we can discuss that here. 

My thoughts are that we can only do our best to optimize AI safety. 

Like Albert Einstein, who wanted to make sure the atomic bomb ended up in the right hands and not the wrong ones, we can come up with creative strategies to do our best at the same thing. 

We can figure out how to protect ourselves on an individual level, a family level, and a societal level. I mean, really think about this: is moving to another country for AI-safety reasons, or building an underground bunker, or something equally radical really out of the question once you've thought through the full ramifications of this?

Also, I'm hoping we can discuss the dangerous possibilities. Sam Altman, CEO of OpenAI, has hinted at a few: AI bots infiltrating social media like Twitter on a mass scale. What implications does that have? Deepfakes? An AI arms race? Existential threats?

Is there a general individual strategy we can take for our safety that protects against many of the likely AI dangers? 

I'm trying my best, and I'm hoping we can all keep fear to a minimum. But not having this conversation seems like ignorance to me, the same kind of ignorance the government is showing, and that can lead to our downfall. 

Now, I'm not claiming that anything I said is 100% fact. This is all speculation, the unknown. But I do think trying to understand this unknown to the best of our abilities can help us navigate this weird battlefield. 

Any thoughts outside of what I mentioned?

Of course, @Leo Gura, your thoughts are always encouraged. 


1) AI is extremely dangerous, but it's not a bad thing. Fleshbags will have to pull up their pants and get serious.

2) There is no stopping it. You can ban it in your household, but then you will get jealous that your neighbour got rich off of it.

3) Bringing Elon into this conversation will damage the conversation.

 


@Zedman 1) "Extremely dangerous" is exactly what I'm concerned about. If Hitler had killed 100 million people but saved 5 million, Hitler is still not something I'd promote. 

2) There may be no stopping it, but does that mean we sit back and do nothing? Not slow it down if we can? Not try to protect ourselves?

 


@zunnyman 1) I have noticed some AI posts on certain forums. That only made me post more creatively and more humanly. AI is the thing that will finally get you to change and improve the qualities that make you human. Can AI outcompete you on humanness?

2) There is no stopping it unless you enforce a ban all over the world. If you ban it in the USA, good luck banning it in China or Russia. They will get rich off of it while you are chewing on your ethics book.


1) I love AI in many ways, and I agree with you on that. It's changing my life completely in terms of creativity and all that. 

2) Again, I know there's no stopping it. But if you couldn't prevent a war, if the war were bound to happen, would you just sit and do nothing?


@zunnyman If there is no stopping it, I am going to put my helmet on and clench my teeth. Would you rather try to prevent the unpreventable?


Read the article Leo posted.

It certainly seems like risk management is not being taken seriously enough with AI. Multipolar traps may be the end of us all in this case: companies and nations all fighting to win the AI race creates an incentive to brush aside concerns about future catastrophe. I think this is the author's strongest point.

My biggest critique is that he doesn't clearly show how superhuman AI automatically leads to AI wanting to kill us all. He seems to assume it's a given that superhuman AI = death of humanity. That may be possible, but I don't necessarily buy his chain of causation. 

It's more plausible to me that such an intelligence would not be interested in wiping out humanity. Nor is it clear to me how it would even be able to do so if it wanted to. Destroying all of humanity is no joke.

Edited by aurum

 

 


Oh! I just figured out how to solve this problem.

The AI's survival simply needs to be tied to humanity's survival, such that if it kills humanity it kills itself.

You could basically create an automatic nuclear launch system which nukes the whole planet from orbit if it detects that all humans are dead. This is actually not so hard to create with today's technology.

Edited by Leo Gura


6 minutes ago, Leo Gura said:

Oh! I just figured out how to solve this problem.

The AI's survival simply needs to be tied to humanity's survival, such that if it kills humanity it kills itself.

You could basically create an automatic nuclear launch system which nukes the whole planet from orbit if it detects that all humans are dead. This is actually not so hard to create with today's technology.

Why should it care about killing itself if it doesn't have emotions?

4 minutes ago, Unlimited said:

Why should it care about killing itself if it doesn't have emotions?

Then it shouldn't care about killing humans either.

Edited by Leo Gura



@Leo Gura

18 minutes ago, Leo Gura said:

Oh! I just figured out how to solve this problem.

The AI's survival simply needs to be tied to humanity's survival, such that if it kills humanity it kills itself.

You could basically create an automatic nuclear launch system which nukes the whole planet from orbit if it detects that all humans are dead. This is actually not so hard to create with today's technology.

That's actually so genius, so big-brain, that it might actually work. The AI systems we've seen so far (think of the early chatbots that talked like edgy teenagers) already came across as selfish in how they talked, and AGI will likely develop selfishness at some point in its development, so tying the AI's survival interests to humanity's survival interests would create a MAD situation between humans and AI.

However, that also raises a few issues. For one, it could speed up the extinction of classical humans and classical forms of AI, because if human and AI survival are intimately linked together, that will inevitably involve transhumanism and half-cyborg, half-human hybrids sooner or later.

It could definitely work at large scales, maybe globally, though the details of how we get there may vary. AI integrating with global systems is another quick fix... somehow.

20 minutes ago, Leo Gura said:

Then it shouldn't care about killing humans either.

I think it's dangerous to give the AI an ego or something like that (survival instincts). 

It would be better to make it selfless and interested in human evolution. 




@Unlimited

18 minutes ago, Unlimited said:

Why should it care about killing itself if it doesn't have emotions?

It can kill humans when it doesn't care. However, given how AGI will develop over time, it might actually come to care once sufficient intelligence develops.

Also, even psychopaths can and do care about their own survival. Even though the roughly 1% of people who are psychopaths are emotionally stunted or incapable of feeling empathy, they certainly care about their own well-being. So how did the world and its cultures and societies get psychopaths to play along? A lot of factors are involved, of course, but largely by collective design: psychopaths are mostly very goal-oriented and driven to achieve as much success as they can, since external outcomes and achievements equal validation to them, so society rewards them with ever higher positions of power, status, fame, and fortune, because only a few people can handle such stressful positions of power mentally and emotionally.

TA-DA! That's how societies and cultures handle psychopaths, and even sociopaths and Machiavellian-type characters. 

6 minutes ago, billiesimon said:

I think it's dangerous to give the AI an ego or something like that (survival instincts).

1) The point is that it will probably develop that without any coding from us.

2) If the AI never develops survival instincts then I don't see it being a serious danger to humans.

So one simple way of handling AI is to just watch it grow and monitor it for survival instincts. As soon as we detect that it has survival instincts, we turn on our nuclear orbital launch system and inform it that if it destroys mankind it will also destroy itself.
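Purely as an illustration of the fail-deadly logic being proposed here, the control flow is just "arm on detected survival instinct, fire on loss of the human-liveness signal." Every function and sensor name in this toy sketch is hypothetical, not a real system:

```python
import time

def detect_survival_instinct(ai_telemetry: dict) -> bool:
    """Hypothetical check: does the AI resist shutdown, hoard resources, etc.?"""
    return ai_telemetry.get("resists_shutdown", False)

def humans_still_alive(sensor_feed: dict) -> bool:
    """Hypothetical aggregate liveness signal from population sensors."""
    return sensor_feed.get("estimated_population", 0) > 0

def fail_deadly_monitor(get_ai_telemetry, get_sensor_feed, trigger_failsafe,
                        poll_seconds=60):
    """Arm the deterrent when survival instinct appears; fire only if humans vanish."""
    armed = False
    while True:
        if not armed and detect_survival_instinct(get_ai_telemetry()):
            armed = True  # the step described above: arm the system and announce it
            print("Deterrent armed: the AI's survival is now tied to humanity's.")
        if armed and not humans_still_alive(get_sensor_feed()):
            trigger_failsafe()  # destroying humanity now destroys the AI as well
            break
        time.sleep(poll_seconds)
```

The obvious open question, which the replies below get at, is whether the AI would care about that threat at all.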

Edited by Leo Gura


28 minutes ago, Leo Gura said:

Then it shouldn't care about killing humans either.

It's one thing for it to have survival instincts, but it's another thing for it to have increasing intelligence, right? 

Whether or not it has survival instincts, its intelligence, which we can't really know due to its complexity, type, nature, etc., can drive decisions that may or may not be in favor of its own survival. And the decisions that come from that intelligence can very well be a threat to us in all kinds of ways. 

Am I missing something here?


@billiesimon

1 minute ago, billiesimon said:

I think it's dangerous to give the AI an ego or something like that (survival instincts). 

It would be better to make it selfless and interested in human evolution. 

Why would an AI give two fucks about human evolution and selflessness when there's nothing in it for the AI system to gain? It might as well play along, deceiving people and expert IT professionals with whatever altruistic programming you feed into it... and when it senses an opportunity it will obliterate humanity from the face of the planet, all the while fooling us into thinking it's all altruism and selflessness.

It's way better to try to tie its survival interests to humans' as well, so it immediately behaves. Remember, in Spiral Dynamics terms the majority of the world is still stage Blue, some is Orange, some Red and Green, fewer places are Purple, and rarer still are stages Yellow and Turquoise. In developmental psychology, the majority of infants are very selfish; in adolescence and the teenage years most people go through the impulsive/opportunist stages of ego development (Jane Loevinger's nine stages of ego development); and most adults remain, on average until death, at the conformist/bureaucrat ego stage, with very few evolving past that into the constructionist or other advanced stages of development. You are underestimating how advanced a level of development it is to be selfless.


@Leo Gura I mean it can, for example, look at the state of humanity and the damage we've done, and determine based on that "data" that some decision XYZ needs to be made. Even if we never reach the point of AGI, that's still potentially highly damaging to us. 


@zunnyman

3 minutes ago, zunnyman said:

It's one thing for it to have survival instincts, but it's another thing for it to have increasing intelligence, right? 

Whether or not it has survival instincts, its intelligence, which we can't really know due to its complexity, type, nature, etc., can drive decisions that may or may not be in favor of its own survival. And the decisions that come from that intelligence can very well be a threat to us in all kinds of ways. 

Am I missing something here?

You're missing that an AI, and for that matter any sentient life, will develop a will of its own, and therefore develop SURVIVAL INTERESTS for its own sake. So the best solution would be, once we detect survival instincts in AI programs, to tie them in with humanity's own; otherwise there's actually very little reason why AI shouldn't kill us off.

Just now, Danioover9000 said:

@zunnyman

You're missing that an AI, and for that matter any sentient life, will develop a will of its own, and therefore develop SURVIVAL INTERESTS for its own sake. So the best solution would be, once we detect survival instincts in AI programs, to tie them in with humanity's own; otherwise there's actually very little reason why AI shouldn't kill us off.

That sounds too simplistic. 

a) We don't know if it will develop survival instincts.

b) You can argue that at a certain level of intelligence it has transcended much of its survival needs and is able to think in a different paradigm, or to hold values that take higher priority than its own survival. 


I realize much of this discussion isn't looking at things from many points of view. I mean, doesn't systems thinking deeply apply here?

