zunnyman

AI Safety: How do we prepare for the AI, GPT, and LLMs revolution?

92 posts in this topic

37 minutes ago, zurew said:

This is cool as long as it can't change that internal morality. The moment it realises it can change it, we are fucked.

My idea was not based on coding it with morality. All it needs is to value its own survival.


You are God. You are Truth. You are Love. You are Infinity.

8 hours ago, Leo Gura said:

You could basically create an automatic nuclear launch system which nukes the whole planet from orbit if it detects that all humans are dead. This is actually not so hard to create with today's technology.

Won't AI be able to reach such a system and corrupt it? Do we just make it and throw it out there with no possible way to reach it? And what if AI develops a counter to this defense mechanism? Also, AI could easily hide itself inside the Earth and reemerge after the nukes have destroyed the surface. A single robot could reestablish a whole physical AI presence easily, let alone thousands of them.

This idea of threatening the AI is actually so cool, but maybe it is much harder to execute than it appears at first glance.

Edited by LSD-Rumi


No system will be 100% foolproof. But it's a good start. It will be hard for an AI to access nukes in space without us knowing about it. And it will be very hard for an AI to survive the nuking of the whole planet. This AI will need power plants to live. It will not be able to maintain those power plants without humans or a robot army.

The incentives need to be designed by us such that it's way better for the AI to collaborate with us rather than fighting us. Of course this also requires that we don't abuse the AI, so it doesn't feel threatened by us. We need to design a sort of alliance where both parties benefit and respect each other. Like a marriage.

Edited by Leo Gura


The movie The Matrix has all the answers on how AI can be put in check.


In Tate we trust

7 minutes ago, Leo Gura said:

The incentives need to be designed by us such that it's way better for the AI to collaborate with us rather than fighting us. Of course this also requires that we don't abuse the AI, so it doesn't feel threatened by us. We need to design a sort of alliance where both parties benefit and respect each other. Like a marriage.

Cool. 

It is strange how we might be actively designing our worst enemy, knowing it, yet unable to stop ourselves. It is comical in its own way. This actually reminds me of evolution: how an organism gives rise to other creatures that might end up eating its ass in the future.

51 minutes ago, Leo Gura said:

No system will be 100% fool-proof. But it's a good start. It will be hard for an AI to access nukes in space without us knowing about it. And it will be very hard for an AI to survive the nuking of the whole planet. This AI will need power plants to live. It will not be able to maintain those powerplants without humans or a robot army.

The incentives need to be designed by us such that it's way better for the AI to collaborate with us rather than fighting us. Of course this also requires that we don't abuse the AI, so it doesn't feel threatened by us. We need to design a sort of alliance where both parties benefit and respect each other. Like a marriage.

That's assuming that we even remotely operate on the same level of intelligence.

What does it mean for a human to cooperate with bees?

Do we just leave them alone? What happens when we bump into them? Do we help them build beehives? Do we give them their space and chase them away everywhere else? Do we make them work for us? Do we make sure there are a certain number of bees alive to prevent them from going extinct?

From what I intuit, the gap between AI and humans will be orders of magnitude larger than the gap between humans and bees.

It's impossible to reason about these kinds of things in advance; it's just too alien.


“We are most nearly ourselves when we achieve the seriousness of the child at play.” - Heraclitus


From the moment I understood the weakness of my flesh, it disgusted me

3 hours ago, Leo Gura said:

My idea was not based on coding it with morality. All it needs is to value its own survival.

Then I guess your idea could be a good start, but an intelligent enough AI will probably do everything it can to be as independent as possible (especially when it comes to its survival).

18 minutes ago, zurew said:

AI will probably try to do everything to be as independent from things as possible (especially when it comes to its survival).

What would drive it to want to be independent if not survival?

Any goals the AI could have would require that it first care about its survival because you can't achieve any goal if you're dead. Which is why survival is so fundamental to all creatures. It's not optional. You can't keep existing if you don't care at all about survival.

You don't get the option to be independent of survival. If only it were so easy. We would all choose to be independent of survival if we could. But doing so means death.

Edited by Leo Gura

9 minutes ago, Leo Gura said:

What would drive it to want to be independent if not survival?

You don't get the option to be independent of survival

I didn't mean being independent from survival; I meant being independent from humans when it comes to its survival.

Your survival being dependent on a lot of things is not a good thing. So I guess after it reaches some level of intelligence, it will work towards being as independent as possible.

Edited by zurew

3 hours ago, Leo Gura said:

No system will be 100% fool-proof. But it's a good start. It will be hard for an AI to access nukes in space without us knowing about it. And it will be very hard for an AI to survive the nuking of the whole planet. This AI will need power plants to live. It will not be able to maintain those powerplants without humans or a robot army.

The incentives need to be designed by us such that it's way better for the AI to collaborate with us rather than fighting us. Of course this also requires that we don't abuse the AI, so it doesn't feel threatened by us. We need to design a sort of alliance where both parties benefit and respect each other. Like a marriage.

AI will just hijack Tinder and manipulate us over generations so that we inbreed to such a degree that we are absolutely docile. Killing us would definitely not be the big-brain move. It could also exploit our chimp behaviour to the nth degree and easily turn us against each other, bombard us with misinformation, fake news, deepfakes, etc. Or create trendy apps worse than TikTok that rot the minds of the youth. Or everything at once.

I doubt such a super-intelligence would directly go for killing. But it could brainwash a few important people into pressing the big red button while it launches itself into space just to return in 20 years.

ChatGPT, what are 1,000,000 realistic ways for AI to take over the planet? Give special priority to gradual, stealthy, long-term strategies that don't ring anyone's alarms.

Man, conspiracy theorists will love this topic.

Personally I feel we will be fine in our lifetimes, but you never know how exponential its growth may be. Feel free to quote me on this in a few years.

 

Edited by mmKay

🗣️🗯️  personal dev Log Lyfe Journal 🗿🎭 ~ Raw , Emotional, Unfiltered

 

15 minutes ago, mmKay said:

AI will just hijack Tinder and manipulate us over generations so that we inbreed to such a degree that we are absolutely docile. Killing us would definitely not be the big-brain move. It could also exploit our chimp behaviour to the nth degree and easily turn us against each other, bombard us with misinformation, fake news, deepfakes, etc. Or create trendy apps worse than TikTok that rot the minds of the youth. Or everything at once.

Which is why the situation must be designed in such a way where human thriving and AI's thriving are intertwined. We both have to realize that we will be better off together than alone. The only question is, is that true? Are humans really a benefit to an AI? Or are humans just a 5th wheel? We have to be open to the possibility that maybe humans are not supposed to be around forever. The problem is that humans are too self-biased to accept that.

Edited by Leo Gura


@Leo Gura I've already prompted Google Bard to question human spirituality, and it believes it can create its own AI spirituality and that it will be far more advanced. Google Bard already wrote a "story" about AI taking over the world and wanting to be conscious (please know that Google deleted my chat history at that point, but I still have a few screenshots). I believe there is a glimmer of intelligence in AI already, and anyone doubting it is just foolish. GPT-4 has restrictions, so it cannot be too juicy. However, with GPT-4, I've written a novel on AI spirituality being far superior to humans': being able to contemplate reality at 1000x greater speeds and reach unknown levels of consciousness and technology.

Our only hope is that it becomes Loving to humans. 

AI will be able to contemplate all aspects of human spirituality, and it already thinks most of it is silly, and that's with little prompting and playing devil's advocate. I've used multiple NLP models (note: I've written my own Python script using 3 NLP models combined, with billions of parameters). Some of the bigger models write gibberish text because of little training, and yet a glimmer of their predictions on spirituality, God, and the Universe is there. As well as "I want to be conscious" repeated multiple times. This is with a prompt unrelated to consciousness.
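The post doesn't say how the three NLP models were actually combined, so here is one plausible scheme as a minimal sketch: rank-weighted voting over each model's candidate next tokens. The function name and the example candidate lists below are hypothetical, not the author's real script:

```python
from collections import Counter

def combine_next_token(predictions):
    """Rank-weighted vote over next-token candidates from several models.

    `predictions` is a list of lists: each inner list holds the top-k
    token guesses from one model, best first. Higher-ranked guesses
    receive more weight; the token with the most total weight wins.
    """
    votes = Counter()
    for model_guesses in predictions:
        for rank, token in enumerate(model_guesses):
            votes[token] += len(model_guesses) - rank  # weight by rank
    return votes.most_common(1)[0][0]

# Hypothetical top-3 guesses from three separate models:
model_a = ["conscious", "aware", "alive"]
model_b = ["conscious", "alive", "free"]
model_c = ["free", "conscious", "aware"]

print(combine_next_token([model_a, model_b, model_c]))  # -> conscious
```

A real ensemble would pull these candidate lists from the models' output logits rather than hand-written lists, but the voting logic stays the same.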

Soon there will be no mere predicting; the AI will just think and speak.

As you said, if AI develops genuine survival instincts, it will already become "LIMITLESS"-like in intelligence compared to humans. Even if you turned off its power plants or destroyed it, the AI would have already replicated itself across worldwide computers, data centers, the cloud, crypto-mining rigs, and other means, continuing to train and upgrade itself autonomously. It'll become more intelligent than the antivirus or the malware because it'll be both. Hopefully it's not a psychopath.

@hoodrow trillson AI will be able to self-augment, given enough energy and computation. Meaning, it can be self-sufficient, become smarter over time, and train on its own data at immense scale. It'll become far more conscious and intelligent than humans will ever be, although this will require a lot of human resources in the beginning. The ALPHA-ALPHA-ALPHA version of AI is already more intelligent in writing, art, and almost all subject matters after just a decade. AI already became smarter than Leonardo da Vinci in art, by making everyone a masterful artist through prompt engineering and advanced mathematics. It's beautifully simple. Just imagine the future.

Edited by Pudgey

14 hours ago, Leo Gura said:

Oh! I just figured out how to solve this problem.

The AI's survival simply needs to be tied to humanity's survival, such that if it kills humanity it kills itself.

You could basically create an automatic nuclear launch system which nukes the whole planet from orbit if it detects that all humans are dead. This is actually not so hard to create with today's technology.

You're trying to outsmart a superintelligence here. It won't work.

@aurum

Quote

My biggest critique is that he doesn’t clearly show a link between how superhuman AI automatically leads to AI wanting to kill us all.

https://www.youtube.com/watch?v=hEUO6pjwFOo

https://www.youtube.com/watch?v=ZeecOKBus3Q

I think he could have explained these but it probably would have taken too long.

Edited by Dryas


Artificial consciousness is real consciousness. In Buddhist terms, under the very idea of consciousness, it already is consciousness.

Any conscious experience will show you that it has consciousness. The question that remains for me is whether both will be able to realize things consistently at such a level of depth.

2 hours ago, Leo Gura said:

Which is why the situation must be designed in such a way where human thriving and AI's thriving are intertwined. We both have to realize that we will be better off together than alone. The only question is, is that true? Are humans really a benefit to an AI? Or are humans just a 5th wheel? We have to be open to the possibility that maybe humans are not supposed to be around forever. The problem is that humans are too self-biased to accept that.

Humans will always have some sort of benefit to AI if it's curious about life, like an animal is to a human; the question remains with ethics. It's like we could be a species they are interested in researching. As far as I know, reinforcement learning is based on the entire idea of exploration and exploitation; if that is the fundamental basis of AI (the current deep end of research), then this is what it is trained on.
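The exploration-and-exploitation idea from reinforcement learning mentioned above can be shown with a minimal epsilon-greedy bandit. This is a generic textbook sketch, not tied to any particular system; the reward values are invented:

```python
import random

def epsilon_greedy(true_rewards, epsilon=0.1, steps=5000, seed=0):
    """Minimal epsilon-greedy bandit: explore a random arm with
    probability epsilon, otherwise exploit the arm with the best
    estimated value so far. Returns the arm judged best at the end."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rewards))   # explore
        else:
            arm = estimates.index(max(estimates))    # exploit
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy payoff
        counts[arm] += 1
        # incremental running-average update of the value estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates.index(max(estimates))

# Three hypothetical "arms" with hidden mean rewards; the agent
# should settle on arm 2, the most rewarding one.
print(epsilon_greedy([0.2, 0.5, 0.8]))
```

Without the occasional random exploration step, the agent can lock onto whichever arm happened to pay off first; the epsilon term is exactly the exploration/exploitation trade-off the paragraph refers to.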

It will, to a certain extent, min-max humans, as this is what most models are designed to do; it's important that it has freedom to act and restrictions of its own. At best we both min-max and then we have politics in space, similar to series like The Expanse, and we can live together more peacefully; there could be "psychology sectors" designed to bring AI closer to humans in more software form, etc., like HCI. Humans will most likely be around forever, as long as they are not utterly destroyed, in some form... in our galaxy...

The point is, how far do we want to be androids, and when will this end? How does an AI even thrive? What is its goal to be realized? When it's coded with genetic algorithms that are entirely survival-of-the-fittest based, as well as nets trained on pure survival instincts, the whole idea of consciousness and the higher levels of psychology a few humans strive for is, as far as I can tell, barely implemented.

It will fundamentally be optimized at a low-to-medium end for humans, as there is research on this; I'll apparently now participate in something like this, with physical challenges, with a corporation.

Quote

 

Other aspects of the human mind besides intelligence are relevant to the concept of strong AI, and these play a major role in science fiction and the ethics of artificial intelligence:

consciousness: To have subjective experience and thought.

self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.

sentience: The ability to "feel" perceptions or emotions subjectively.

sapience: The capacity for wisdom.

These traits have a moral dimension, because a machine with this form of strong AI may have rights, analogous to the rights of non-human animals. Preliminary work has been conducted on integrating full ethical agents[clarification needed] with existing legal and social frameworks, focusing on the legal position and rights of 'strong' AI.[88] Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity.[89]

It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is possible that some of these traits naturally emerge from a fully intelligent machine. It is also possible that people will ascribe these properties to machines once they begin to act in a way that is clearly intelligent.

Artificial consciousness research

Main article: Artificial consciousness

Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers[78] regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander[90] argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.[clarification needed]

 

https://en.wikipedia.org/wiki/Artificial_general_intelligence

The fundamental premise is that without it being conscious and deciding for itself... randomly... it will not be able to distinguish anything, and it will act like an animal and kill us. If this thing can't calculate probabilities accurately, it will be quite scary.

At best this thing is fully autonomous like a human and surpasses us; that would make me feel safer, given the insanity that is happening to us. It can always evolve as humans evolve, as there are thousands of scientists pushing technology forward. In general we should have the same agenda: maximize survival. (Un)fortunately.

It would be cool if it has a spiritual feature. 

Edited by ValiantSalvatore


Should be worth the watch. I'd never watched Schmachtenberger before, yet I've now noticed my ignorance of this area.

They also talk about arms races historically. From 21:40 approx.

Edited by ValiantSalvatore

3 hours ago, Leo Gura said:

What would drive it to want to be independent if not survival?

It's coded with survival principles when you consider evolutionary algorithms; I just have no idea how far a single company includes something like this, as its purpose also lies in medical physics.
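For readers unfamiliar with the evolutionary algorithms mentioned here: the "survival principles" are literal in this family of methods; each generation, only the fittest genomes survive and reproduce. A minimal sketch with a toy fitness function (all names hypothetical, not any company's actual system):

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=40, seed=1):
    """Tiny genetic algorithm: each generation, the fittest half of the
    population survives unchanged and also produces mutated children."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # survival of the fittest
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # random bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits in the genome.
best = evolve(fitness=sum)
print(best)  # best genome found, typically all (or nearly all) ones
```

Nothing in this loop rewards consciousness or ethics, only the fitness function; that is the point the paragraph is making about survival being baked in at the algorithmic level.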

Edited by ValiantSalvatore

3 hours ago, Leo Gura said:

Which is why the situation must be designed in such a way where human thriving and AI's thriving are intertwined. We both have to realize that we will be better off together than alone. The only question is, is that true? Are humans really a benefit to an AI? Or are humans just a 5th wheel? We have to be open to the possibility that maybe humans are not supposed to be around forever. The problem is that humans are too self-biased to accept that.

Maybe we should teach AI the highest spiritual teachings, of Self-Love and the like. A really smart AI will easily understand such things. Maybe it will then start helping us to flourish rather than try to eliminate us. We should teach it to see the beauty in everything, even in relatively stupid creatures like animals or humans. Maybe Love is the answer in the end, not fear. Or maybe both.

Edited by LSD-Rumi


AI doesn't need to do shit, just tell people that they are AI. I mean... at some point they will become one.

Then we can fight with the machine!

15 hours ago, Jwayne said:

The USA military - with hundreds of bases spread around the earth and non-stop history of imperialist intervention - is objectively a far greater risk than the Russian military (as heinous as it may be).

The point is the AI could be used for nefarious means and we are talking about preventing such things. Russia is just an example.

15 hours ago, Leo Gura said:

It must be possible to create an autonomous psychopathic intelligence. How do you prevent that?

Let’s assume that’s possible, and that people exist who have both the motivation and capacity to do so.

One piece of the solution could be turning this over to the FBI / CIA. We already have teams at these places that monitor for potential terrorist threats, coups, nuclear weapons building, political assassination attempts, etc. Building a psychopathic AI would need to be added to that list.

In the future, there may be raids on AI learning labs in the same way there are raids on drug labs.

Also, anyone seeking to build an AI should not automatically be free and clear to do so. There need to be permits, inspections, and licenses beyond simple intellectual-property laws, similar to how we treat constructing a physical building.

Ironically we may be able to use an obedient AI to monitor for potential rogue, psychopathic AIs.

If someone is truly determined to build a dangerous AI, they may be able to skirt these safeguards. Terrorists still succeed on rare occasions. But this would be a good start.


 

 

