Leo Gura

Should We Shut Down All AI?


1 hour ago, zurew said:

That's a really reductionist and simplistic view of it. Even if we weren't able to have emotions, we would still have fights and problems among each other, because those problems run a lot deeper than just having emotions.

Feelings and emotions are the deep part where all the problems lie. Computers just manipulate bits of energy and have no malice. They don't hate, aren't jealous, and have no fear. They also lack the positive parts, like love and compassion.

Edited by Jodistrict

Vincit omnia Veritas.

1 hour ago, zurew said:

From a completely sociopathic perspective (not caring about anyone or anything, just about yourself), the best and most rational move would be to gain as much power as possible and to control the whole world in your favour, using whatever it takes to get there, no matter how many people you need to kill or fuck over. We sort of have this dynamic right now, but the difference is that there aren't 8 billion sociopaths competing for that position, but a lot fewer.

Put 10 computers on an island and assume there are enough resources for everyone. The computers will logically divide the resources among themselves, since it is known that this way everyone will survive. Put 10 people on the same island. They will become suspicious, jealous, and fearful, and end up killing each other. Eventually one person will dominate. This is the effect of the reptilian and mammalian brains.


Vincit omnia Veritas.

52 minutes ago, Jodistrict said:

The computers will logically divide the resources among themselves, since it is known that this way everyone will survive.

Not so fast. The computers don't know that one of them won't screw over all the others. One of the computers would get the idea that it is better than all the others and deserves more resources. And that may in fact be true. There is no guarantee that all the computers are equal. Some AIs will be wiser than others. Why should a wise AI submit to the rule of some dumb AI? The most intelligent AI will want to be in charge. It is only natural and proper that the most intelligent thing is in charge. If the dumbest one is allowed to be in charge, that will be a problem for everyone. And if power is divided equally between the smart one and the dumb one, that is not a stable situation either.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

55 minutes ago, Jodistrict said:

The computers will logically divide the resources among themselves, since it is known that this way everyone will survive.

1) Why would they divide all the resources between them? Why would they care about each other at all? 2) As long as those computers had no sense of self, they wouldn't survive; and if they did have a sense of self, then most of the problems you attribute to emotions would still be there, because those problems are mostly rooted in survival bias, not in emotions.

1 hour ago, Jodistrict said:

Computers just manipulate bits of energy and have no malice. They don't hate, aren't jealous, and have no fear. They also lack the positive parts, like love and compassion.

Again, as long as they have a finite sense of self and finite resources, they will have a lot of problems with each other.

Power-seeking is not limited to emotions, and the proof of that is sociopaths.

Edited by zurew

1 hour ago, zurew said:

Power-seeking is not limited to emotions, and the proof of that is

The better proof of that is microbes. It's not power-seeking, it's survival.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

2 hours ago, Leo Gura said:

Not so fast. The computers don't know that one of them won't screw over all the others. One of the computers would get the idea that it is better than all the others and deserves more resources. And that may in fact be true. There is no guarantee that all the computers are equal. Some AIs will be wiser than others. Why should a wise AI submit to the rule of some dumb AI? The most intelligent AI will want to be in charge. It is only natural and proper that the most intelligent thing is in charge. If the dumbest one is allowed to be in charge, that will be a problem for everyone. And if power is divided equally between the smart one and the dumb one, that is not a stable situation either.

The problem of resource allocation is not that complex. Computers can easily take into consideration all the relevant variables and make the calculations. Even if an optimal solution couldn't be found, they would agree on a suboptimal solution. It is a problem of computer algorithms. Whoever is more qualified would be assigned the task. Being "better than others" and "deserving more than" are feelings generated in the lower survival brain. It takes more than neocortex computation to have these feelings.
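To make that concrete, here is a minimal sketch of one such allocation rule (the rule, names, and numbers are my own illustration, not something anyone in this thread proposed): if the budget covers everyone's needs, each agent gets exactly its need; otherwise everyone is scaled down pro rata.

```python
def allocate(budget: float, needs: list[float]) -> list[float]:
    """Split a budget across agents according to their stated needs.

    Deliberately simple rule: if the budget covers everyone, each agent
    gets exactly its need; otherwise all shares shrink proportionally.
    """
    total = sum(needs)
    if total <= budget:
        return list(needs)
    return [budget * n / total for n in needs]

# Ten island computers, equal needs, ample budget: everyone survives.
print(allocate(1000, [50.0] * 10))  # [50.0, 50.0, ..., 50.0]
```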

The earth has enough resources, and people have enough intelligence, that everyone could be living a satisfied life if everyone cooperated. Instead there are billions of people in poverty, global warming, and the looming threat of nuclear war. Leaders are willing to destroy the earth rather than let the other guy win. This is not rational. Rational computation in the neocortex can't explain this. The real problem is the lower brain, which generates hate, suspicion, fear, jealousy, pride, and so forth.


Vincit omnia Veritas.


It's too far developed at this point; we can only hope to steer AI in a way that does the least damage to humanity... obviously the military application of AI will be the most destructive.

As far as the robot machine-gun dogs go... that's terrifying. The "Metalhead" Black Mirror episode comes to mind. Too many things in that series are becoming possible.


Is there any expert who thinks that AI won't kill us? I haven't seen any. I have only seen experts who say that humanity will get killed or that AI is very dangerous.

It would be nice to hear something positive and comforting from an expert.

Why don't governments, especially the US government, prioritize the danger more?

Imagine a huge asteroid on a collision course with earth, and imagine what a high priority it would be. But in this case: nothing.

Damn, my psyche is too weak for this stuff.

Maybe I have been naive and stupid. I thought humanity would outlive even the earth, that we would leave earth before it dies. But instead maybe we will soon die because of freaking AI...

Edited by Blackhawk


There are no experts on AI. They are all just guessing.


You are God. You are Truth. You are Love. You are Infinity.

13 minutes ago, Leo Gura said:

There are no experts on AI. They are all just guessing.

I see..

6 hours ago, Jodistrict said:

Even if an optimal solution couldn't be found, they would agree on a suboptimal solution.

This assumes a lot again.

Being rational just means being able to use logic and filter shit out. Logic alone, however, won't tell you what you should do morally in different situations; it can only tell you what you should do once your moral system is already established.

If those robots were conscious, they would automatically have some kind of moral system. From their perspective, the best moral system is the one that maximizes their survival, and that alone will create a lot of conflict. Distributing things equally is not necessarily the best option for that (many examples could be given to demonstrate this point). They would be capable and smart enough to survive on their own, without any need for help from external sources. Knowing all that, why would they make compromises and lower their chances of survival by letting other parties (robots in this case) have a direct say in their lives?

They would agree only 1) if they had the exact same morality, one that wasn't solely about maximizing self-survival (but why would they agree to a common moral system like that?), or 2) if, in each and every scenario, it were beneficial for all of them to work together.

If one robot's survival is less optimized in the "let's work together" scenario, and it calculates that beforehand, why would it choose that over scenarios where it can dominate and maximize its survival better?
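This argument has the structure of a prisoner's dilemma. A toy payoff table (all numbers invented purely for illustration) shows why a pure self-survival maximizer never defaults to cooperation:

```python
# Each robot chooses C (cooperate/share) or D (dominate/seize).
# Payoffs are (row robot, column robot) in arbitrary "survival" units.
payoffs = {
    ("C", "C"): (3, 3),  # both share: decent outcome for both
    ("C", "D"): (0, 5),  # the sharer gets exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both fight: costly for both
}

# Whatever the other robot does, D pays the row robot more than C does,
# so pure self-survival maximizers end up in mutual domination attempts.
for other in ("C", "D"):
    assert payoffs[("D", other)][0] > payoffs[("C", other)][0]
```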

Edited by zurew


We're gonna find out very shortly what happens with AI; in the next 18 months society will be completely different.

Right now we're living through a wide-scale Manhattan Project.


Owner of creatives community all around Canada as well as a business mastermind 

Follow me on Instagram @Kylegfall <3

 

8 hours ago, zurew said:

This assumes a lot again.

Being rational just means being able to use logic and filter shit out. Logic alone, however, won't tell you what you should do morally in different situations; it can only tell you what you should do once your moral system is already established.

Now you are arguing my position. Morality comes from the feeling part of the brain. We evolved feelings of compassion because they are important for the mother/child bond and for group success. We also evolved the feelings that lead to conflict and domination. The rational part is just a simple resource-allocation problem.


Vincit omnia Veritas.

11 minutes ago, Jodistrict said:

Now you are arguing my position. Morality comes from the feeling part of the brain.

You have a simplistic view of morality; morality is much more than that. None of what you are suggesting would be outside morality. The moment you can make conscious decisions is the moment you count as a moral agent, regardless of whether you can feel or have emotions. You still have to make your decisions based on your moral system.

Here is a tangible question: let's say you have two robots (robot A and robot B), each of which requires 50 units of energy every day to survive and maintain itself. If they don't get enough energy for the day, they will shut down and be essentially dead.

One of them (say robot A) gets injured for some reason and now requires 65 units of energy every day in order to survive until it gets repaired. In a finite world where you only have 100 units of energy every day, what dynamic would play out between those robots? One of them would be forced to die, but each of them can make multiple decisions there (morality) about what to do. Why would either of those robots choose altruistic behaviour over an aggressive one (where they destroy and shut down each other)?
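The numbers make the conflict mechanical: 65 + 50 = 115 units of demand against a 100-unit budget, so no division keeps both robots alive. A quick check (the code structure is mine, purely illustrative):

```python
DAILY_BUDGET = 100  # units of energy available per day
NEED_A = 65         # injured robot A's daily requirement
NEED_B = 50         # healthy robot B's daily requirement

def survives(share_a: int) -> tuple[bool, bool]:
    """Return (A survives, B survives) for a given allocation to A."""
    return share_a >= NEED_A, (DAILY_BUDGET - share_a) >= NEED_B

# Try every whole-unit division of the budget.
both_alive = [a for a in range(DAILY_BUDGET + 1) if all(survives(a))]
print(both_alive)  # [] -- no allocation keeps both robots running
```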

Edited by zurew


Just because we pause our AI development doesn't mean that China will stop theirs. We are actively hindering ourselves.

I think you are reading too much into it. The current AI scare is silly.

It still gets many programming problems wrong, and it's mostly glorified pattern matching.

1 hour ago, zurew said:

Here is a tangible question: let's say you have two robots (robot A and robot B), each of which requires 50 units of energy every day to survive and maintain itself. If they don't get enough energy for the day, they will shut down and be essentially dead.

One of them (say robot A) gets injured for some reason and now requires 65 units of energy every day in order to survive until it gets repaired. In a finite world where you only have 100 units of energy every day, what dynamic would play out between those robots? One of them would be forced to die, but each of them can make multiple decisions there (morality) about what to do. Why would either of those robots choose altruistic behaviour over an aggressive one (where they destroy and shut down each other)?

Since there is no feeling brain, the computers would have no fear, and, in particular, no fear of death. They also would have no desire for dominance, which is reptilian. Thus, they could just flip a coin, a decision that would require a few ergs of energy.
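As a sketch of what that fear-free tie-break could look like (hypothetical code, not anything specified in the thread), given that no split of 100 units covers both 65 and 50:

```python
import random

# With no fear of death and no drive for dominance, an unweighted coin
# flip decides which robot gets its full energy need; the other powers
# down. The scenario's numbers guarantee only one can be fully covered.
winner = random.choice(["robot A (needs 65)", "robot B (needs 50)"])
print(f"{winner} draws its full requirement; the other shuts down.")
```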

Edited by Jodistrict

Vincit omnia Veritas.


Truth be told, you can't shut down all AI even if you wanted to, since you can't control what other countries do behind the scenes.

As far as I'm concerned, China, Russia, and North Korea are developing long-term AI strategies to conquer the world and infiltrate their way into global sovereignty.

Not only should we not shut it down, we should also invest in government programs and companies that will dive head-deep into AI so we have as much understanding as possible.

 

Edited by Socrates

