Leo Gura

Should We Shut Down All AI?

280 posts in this topic

1 hour ago, Leo Gura said:

I don't know that playing Game B will help you survive when you are encircled by sharks.

That's why (if you reread what I wrote) the positive multipolar trap has to be implemented. It's not a simple Stage Green "let's moralise the fuck out of everything and then naively hope that everyone will get along with us." It's more like implementing a dynamic that is just as effective in the context of Game A as other Game A tools, if not more so, but at the same time lays the groundwork for Game B (so Game A participants will be incentivised to implement the tool, and then, by implementing it, the tool itself will slowly but surely change the Game A countries and structures from the inside).

1 hour ago, Leo Gura said:

Look, we've had nukes, chemical weapons, and biological weapons for 70 years and we've managed to do pretty well. Humans aren't all that stupid.

Sure, but have we had a scenario where almost every individual can have access to tools that are just as dangerous as nukes, if not more so? The biggest point of Game B is to build a structure where we can collaborate and somewhat live in peace.

In the current Game A structure, some forms of collaboration are impossible, even though that kind of collaboration is exactly what would be needed to solve certain global problems. The world still kind of gets along with some countries having nukes, and they can somewhat manage each other so that they don't have to kill each other (although, looking at the current situation, that's very arguable). Now imagine a scenario where billions of people have nukes and they all need to manage and collaborate with each other.

1 hour ago, Leo Gura said:

At the end of the day if this AI gets out of hand we will just bomb the shit out of it. It's not some invincible monster.

That would be cool if we assume that 1) we can track exactly when it gets out of hand, 2) the AI won't deceive or trick us, and 3) if the AI isn't conscious and does 100% of what we want, no one will try to use it to fuck everything up (bad intentions aren't even necessary for this).

Edited by zurew

22 minutes ago, Leo Gura said:

Notice that humans still behave like animals without any technology.

Yes, I agree with this part. It's a huge problem.

Some people talk about transhumanism as a solution (I mean becoming hybrids with machines), but I believe there's a way to keep our animal bodies and transcend our instincts at the same time, without becoming some horrid cyborg monsters.

Do you feel that spiritual training on a global level could make humans transcend the lowest parts of their egos? I have noticed in myself that the more conscious I become, the more I can act as a better human being - not because of morals, but because I can sense how my actions are damaging the world.

I think that implementing spiritual training in schools would make the world transcend to Tier 2 rather quickly. The problem is that very few people are willing to lessen their sense of self :/ Oh, and science doesn't like that idea at all.



33 minutes ago, zurew said:

That's why (if you reread what I wrote) the positive multipolar trap has to be implemented. It's not a simple Stage Green "let's moralise the fuck out of everything and then naively hope that everyone will get along with us." It's more like implementing a dynamic that is just as effective in the context of Game A as other Game A tools, if not more so, but at the same time lays the groundwork for Game B (so Game A participants will be incentivised to implement the tool, and then, by implementing it, the tool itself will slowly but surely change the Game A countries and structures from the inside).

Well, I'm open to hearing these systems you allude to. What are they? And are they actually something that can pass through our political system, or are they pie-in-the-sky stuff?

Quote

Sure, but have we had a scenario where almost every individual can have access to tools that are just as dangerous as nukes, if not more so?

There exists nothing even remotely close to that. Even your wildest AI sci-fi fantasy is not as dangerous as a nuke. And no normie person will get access to such AI. Military-style AIs will be locked away just like nukes.

Quote

That would be cool if we assume that 1) we can track exactly when it gets out of hand, 2) the AI won't deceive or trick us, and 3) if the AI isn't conscious and does 100% of what we want, no one will try to use it to fuck everything up (bad intentions aren't even necessary for this).

There will never be a foolproof guarantee of safety against AI, any more than there is for nukes or viruses. There is always a small risk of something bad slipping through our guard. The solution is just to guard better.

Edited by Leo Gura


52 minutes ago, Leo Gura said:

There exists nothing even remotely close to that.

Thinking in the context of exponential tech, I don't think we are too far away from it. Just take a look at current AI development (how much it has evolved in just one year); I don't know if we can properly comprehend what being exponential looks like as time goes on. In the past, exponential development might have meant increasing by 5 developmental units in one year; now it might be 10,000, and eventually millions and then billions of developmental units every year.
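
To make the compounding concrete, here is a toy sketch (the starting value of 5 units and the yearly doubling are purely illustrative assumptions, not measurements of actual AI progress):

```python
# Toy illustration only: assume capability starts at 5 "developmental units"
# and doubles every year. Both numbers are made-up assumptions.
start_units = 5
for year in range(0, 21, 5):
    units = start_units * 2 ** year
    print(f"year {year:2d}: {units:,} units")
# year 0: 5 units ... year 20: 5,242,880 units. The same "exponential" label
# covers wildly different absolute jumps as time goes on.
```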

52 minutes ago, Leo Gura said:

Well, I'm open to hearing these systems you allude to. What are they?

Again, before we get there, first we have to agree on the framework. Once you agree with the framework at least on an intellectual and logical level, see no contradictions in it, and see how it could be utilized, then we can talk about specific examples of it, or even about creating specific examples of it. If you don't even agree with the framework, then no example will be sufficient for you.

In other words (to use your words from one of your videos), right now I want to get you to agree on the framework (structure), and then we can talk about what kind of variables (content) could be placed in that structure.

Or, in other words, I want you to agree with the main/general function that could be used to generate the specific examples you ask for.

So as not to be too abstract about it (for the record, I haven't thought deeply about specific examples yet, only in a very surface-level way about a few): even if I brought up the same example Daniel brought up, about a more transparent kind of government structure, and you were able to reject it, your rejecting that example wouldn't prove that the framework itself is wrong; it would only prove that the example was bad.

Right now, the goal would be to first agree on the framework and on its necessity, and then after that (here comes the hard part) collectively think and inquire very hard and deeply about proper examples that could represent that framework.

52 minutes ago, Leo Gura said:

There exists nothing even remotely close to that. Even your wildest AI sci-fi fantasy is not as dangerous as a nuke. And no normie person will get access to such AI. Military-style AIs will be locked away just like nukes.

Having access to a superintelligence in your basement (one that can deceive, come up with plans, help you with any kind of knowledge, help you build or even automatically build different kinds of weapons, generate billions of pieces of misinformation, etc. - basically amplify anyone's power and intentions a thousand- to ten-thousand-fold) seems pretty dangerous to me.

If you're talking about people not having access to it, only some elites or the owners of the company, that seems just as scary to me, if not more so. One guy, or a handful of people, having access to a technology that could be used to dominate and terminate the whole world and to force their ideology and morals onto it?

I only see two ways out of that problem: either elevate humanity to the next level socially, spiritually, and psychologically, or create a wise AI that can actually think for itself and won't help anyone fuck up the whole world - or will even prevent everyone from doing so.

Edited by zurew

43 minutes ago, zurew said:

Thinking in the context of exponential tech, I don't think we are too far away from it. Just take a look at current AI development (how much it has evolved in just one year); I don't know if we can properly comprehend what being exponential looks like as time goes on. In the past, exponential development might have meant increasing by 5 developmental units in one year; now it might be 10,000, and eventually millions and then billions of developmental units every year.

Again, before we get there, first we have to agree on the framework. Once you agree with the framework at least on an intellectual and logical level, see no contradictions in it, and see how it could be utilized, then we can talk about specific examples of it, or even about creating specific examples of it. If you don't even agree with the framework, then no example will be sufficient for you.

In other words (to use your words from one of your videos), right now I want to get you to agree on the framework (structure), and then we can talk about what kind of variables (content) could be placed in that structure.

Or, in other words, I want you to agree with the main/general function that could be used to generate the specific examples you ask for.

So as not to be too abstract about it (for the record, I haven't thought deeply about specific examples yet, only in a very surface-level way about a few): even if I brought up the same example Daniel brought up, about a more transparent kind of government structure, and you were able to reject it, your rejecting that example wouldn't prove that the framework itself is wrong; it would only prove that the example was bad.

Right now, the goal would be to first agree on the framework and on its necessity, and then after that (here comes the hard part) collectively think and inquire very hard and deeply about proper examples that could represent that framework.

Having access to a superintelligence in your basement (one that can deceive, come up with plans, help you with any kind of knowledge, help you build or even automatically build different kinds of weapons, generate billions of pieces of misinformation, etc. - basically amplify anyone's power and intentions a thousand- to ten-thousand-fold) seems pretty dangerous to me.

If you're talking about people not having access to it, only some elites or the owners of the company, that seems just as scary to me, if not more so. One guy, or a handful of people, having access to a technology that could be used to dominate and terminate the whole world and to force their ideology and morals onto it?

I only see two ways out of that problem: either elevate humanity to the next level socially, spiritually, and psychologically, or create a wise AI that can actually think for itself and won't help anyone fuck up the whole world - or will even prevent everyone from doing so.

That's how deep the multipolar trap is. You can be System B in a System A world, but that would drastically reduce your competitiveness in a System A world. As long as people value having a harem and 33 Bugattis over the well-being of mankind and the environment, we will live in this system. Until humanity as a whole sees that true happiness and fulfillment don't come from Stage Blue/Orange goals, we won't dismantle the structures that stop us from developing into System B.

We already have the tech to globally cooperate, educate, and develop people, but if our incentives are not pointed in that direction, we won't get there. Unfortunately, we will suffer from our collective ignorance until we're forced to evolve beyond our current paradigm. I will work to educate and develop people as best I can, and that's what I would recommend you do too. That's the best we can currently do. Until then, we have to hedge ourselves as best we can to avoid mutually assured destruction and to reduce international tensions as much as we can.

At the end of the day, if you take this issue seriously, instead of promoting whole-system change, do your best to promote the development of individual people. Can you help your close family and friends develop? Can you show people who are blind or myopic that there are different ways of seeing? I think that's the most positive change we can make. Don't worry about the large scale. Shit's always gonna happen before evolution kicks in.

7 minutes ago, Israfil said:

You can be System B in a System A world, but that would drastically reduce your competitiveness in a System A world.

What I'm talking about is that the assumption that "Game B methods and tools will necessarily be less effective and useful in a Game A world" is not necessarily true, and things don't necessarily have to be that way by nature.

The moment we can come up with examples where a Game B tool is more effective and useful (even in the context of Game A) than other Game A tools, but at the same time can trigger some internal change in the current Game A world and its structures, we have a framework we can use to start moving towards an actual Game B world.

For the sake of working with something tangible (you don't need to agree with the premise; I just bring this up to have something concrete to work with): the easiest example people often bring up is capitalism vs. socialism - that socialism is not as effective at Game A as capitalism, and therefore it will eventually fail under outside pressure from capitalist countries. But what if there were an actual socialist framework that is more effective at Game A things than capitalism? If there were such a system and a country started to implement it, eventually everyone would be forced to implement that kind of socialist system, because if they didn't, they would be left behind economically and slowly lose their political power; they would therefore be incentivised either to create and implement an even more effective framework, or to implement that socialist framework.
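
To make that dynamic concrete, here is a minimal sketch (the growth multipliers and the copy-the-leader rule are illustrative assumptions, not a real economic model): a tool that is assumed to be more competitive spreads purely through self-interested imitation, with no appeal to idealism.

```python
import random

# Toy sketch, not a real model: 10 countries, one adopts a hypothetical
# "Game B" tool that is ASSUMED to be more competitive (higher multiplier).
GROWTH = {"game_a_tool": 1.05, "game_b_tool": 1.08}   # assumed yearly multipliers

countries = [{"tool": "game_a_tool", "output": 100.0} for _ in range(10)]
countries[0]["tool"] = "game_b_tool"                  # one early adopter

random.seed(0)
for year in range(30):
    for c in countries:
        c["output"] *= GROWTH[c["tool"]]
    leader = max(countries, key=lambda c: c["output"])
    # a few laggards copy whatever the current leader is using - the
    # self-interested incentive described above
    for c in random.sample(countries, k=3):
        c["tool"] = leader["tool"]

print(sum(c["tool"] == "game_b_tool" for c in countries),
      "of 10 countries now use the more competitive tool")
```

The point of the sketch is only that nobody adopts the tool out of morality; they adopt it because falling behind costs them.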

6 minutes ago, zurew said:

The moment we can come up with examples where a Game B tool is more effective and useful (even in the context of Game A) than other Game A tools, but at the same time can trigger some internal change in the current Game A world and its structures, we have a framework we can use to start moving towards an actual Game B world.

For the sake of working with something tangible (you don't need to agree with the premise; I just bring this up to have something concrete to work with): the easiest example people often bring up is capitalism vs. socialism - that socialism is not as effective at Game A as capitalism, and therefore it will eventually fail under outside pressure from capitalist countries. But what if there were an actual socialist framework that is more effective at Game A things than capitalism? If there were such a system and a country started to implement it, eventually everyone would be forced to implement that kind of socialist system, because if they didn't, they would be left behind economically and slowly lose their political power; they would therefore be incentivised either to create and implement an even more effective framework, or to implement that socialist framework.

I'm sorry, but this is too much wishful thinking. Of course, if there's a more effective system, people would migrate to it. The point is that effectiveness is deeply correlated with your goals. If your goal is to multiply capital, no system is more effective than the one we're currently living in. Only by changing the goal can we change the game.

4 hours ago, Leo Gura said:

The Russian model has little chance of outcompeting the US, but the Chinese model seems very competitive

I will be surprised if China still exists as the state it is today by 2030. Its short-term competitiveness, fueled by an authoritarian government, is soon going to exact its long-term price.

To make matters worse, they may try to invade Taiwan. This is a serious possibility. That would be the nail in the coffin for China.

1 hour ago, Israfil said:

I'm sorry, but this is too much wishful thinking.

How much did you think about this concept before you immediately rejected it like a knee-jerk reaction? Do you actually know that your assumption about this is necessarily the case, or do you just assume it?

How can you reject it on a logical level? What you are saying here is almost as if you were saying that methods and tools that are more suitable and effective for long-term gain will necessarily be less effective than other tools in the short term. Is that actually true in every instance, or is it just an assumption?

I don't see how you can get to your conclusion deductively from the premise of something being better for the long term. How does something being better for the long term necessarily make it worse for the short term? Walk me through the deduction, because I don't see it. Using inductive reasoning I could see it, but deductively (where the conclusion follows from the premise(s) with 100% certainty and accuracy), that's what I don't see.

1 hour ago, Israfil said:

If your goal is to multiply capital, no system is more effective than the one we're currently living in.

That's a simplistic way to think about this concept. You almost treat this subject as if we were talking about physics (where we can deductively run down our logic and say with almost 100% accuracy what's going to happen and how things will turn out). With subjects like this (which are heavily affected by human psychology and behaviour), that's not the case, and you also assume that we have already tried all kinds of frameworks and know for sure that the current one is the best and most effective we can come up with. That's like saying our evolution is done and there is nowhere left to develop or go.

Sometimes, if your goal is too narrow, optimizing for it won't work very well, because there are other variables that can have a direct effect on that goal, but you won't recognize them, because the framework you think in and work with is too narrow.

So, for example, you might think that having no regulation on the market will necessarily generate more capital than having some regulation on it. I bet that, especially long term, you will generate a lot less capital overall if you let every shady thing happen and let people get psychologically, mentally, and physically fucked up by making them addicted to a bunch of things.

That's just one example where, on the most surface level, an idea might seem cool, but if we think about it one more layer down, we can immediately recognize that that's not necessarily the case.

Can you actually show me an example of a socialist system where there is central planning and it is optimized for generating capital? If not, then how can you be so sure that it wouldn't be more effective than letting the market do its own thing?

Edited by zurew

5 hours ago, zurew said:

If they had no fear of death, they would die really fast, because they wouldn't have a strong incentive to maintain their survival. You can't maintain or create a society that doesn't give any fucks about its survival.

You can't really escape this problem. If you are talking about AIs that don't care about death, then they would have even less incentive to collaborate; if you are talking about AIs that care about survival, then we are back to square one, where they will be forced to make decisions that go against each other's interests - which will make collaboration hard, and deception and manipulation will kick in - and we are back to the same problems we have in our society (regardless of whether you take emotions out of the equation or not).

Humans have a survival instinct, but millions die in wars, and the world is on the brink of nuclear destruction. It's not the rational calculating part of the brain that causes that. It's the reptilian brain – domination, greed, hate, fear. Think about it and you will see that you can't have the motivation for "manipulation" and "deception" without emotions. We have enough left-brain intelligence to solve our problems, and enough resources if they are used wisely. That isn't where the problem lies.



3 minutes ago, Jodistrict said:

It’s not the rational calculating part of the brain that causes that.

Having only a rational, calculating brain without any instinct for survival = death. If you think that having an instinct to survive is an emotion, then sure, framed that way your argument could be correct, but by that framing you are arguing for something that necessarily leads to extinction and death, because without an instinct for survival, why would anyone or anything want to maintain its survival?

Edited by zurew

2 hours ago, zurew said:

How much did you think about this concept before you immediately rejected it like a knee-jerk reaction? Do you actually know that your assumption about this is necessarily the case, or do you just assume it?

How can you reject it on a logical level? What you are saying here is almost as if you were saying that methods and tools that are more suitable and effective for long-term gain will necessarily be less effective than other tools in the short term. Is that actually true in every instance, or is it just an assumption?

I don't see how you can get to your conclusion deductively from the premise of something being better for the long term. How does something being better for the long term necessarily make it worse for the short term? Walk me through the deduction, because I don't see it. Using inductive reasoning I could see it, but deductively (where the conclusion follows from the premise(s) with 100% certainty and accuracy), that's what I don't see.

Reality is constantly showing you, me, and everyone else that we have the capabilities, but not the incentives, to create the long-term systems that will ensure our generational survival. The whole point of a short-term strategy is that you sacrifice sustainability to maximize gains. If I kill the whole forest, I might not have wood in 20 years, but I have plenty of wood now. The point of our productive system is maximizing profit and capital accumulation. You can only be sustainable in this environment if competition is regulated. Otherwise, people who are willing to sacrifice everything to rise to the top of a specific industry will do so, simply because they will have more short-term resources that will smother your more sustainable but less flexible approach.

This is basic temporal preference theory. Contemporary democracy and capitalism simply do not allow long-term plans and investments to flourish, as the most prevalent incentives are to maximize short-term gain. As a president, you won't face the consequences of a bad government, so you can make decisions that give you short-term popularity at the cost of long-term stability. Look at the US debt-to-GDP ratio and you have an example of there being no incentive to be long-term oriented.

Same with business. In the private sector, the extraction rates of natural resources such as construction sand and silicon (the second most abundant element in the Earth's crust) are both absurd examples of how strongly we are incentivized to disregard our long-term survival in favor of competitiveness. We are literally running out of sand for construction. We had a silicon shortage for computer chips back in 2020. If Nvidia doesn't extract like this, Intel will, and they will kill Nvidia's business and keep exploiting the Earth anyway.
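
The structure behind the Nvidia/Intel point can be made explicit with a toy payoff matrix (the numbers are invented purely for illustration): whatever the rival does, exploiting pays more for you individually, even though mutual restraint is better for both.

```python
# Toy payoff matrix (numbers are illustrative assumptions) for the extraction
# race described above: a race-to-the-bottom structure.
payoffs = {
    # (my_choice, rival_choice): (my_payoff, rival_payoff)
    ("restrain", "restrain"): (3, 3),   # resource lasts, both do fine
    ("restrain", "exploit"):  (0, 5),   # the restrained firm gets smothered
    ("exploit",  "restrain"): (5, 0),
    ("exploit",  "exploit"):  (1, 1),   # both burn through the resource
}

def best_reply(rival_choice):
    """My best choice against a fixed choice by the rival."""
    return max(("restrain", "exploit"),
               key=lambda mine: payoffs[(mine, rival_choice)][0])

for rival in ("restrain", "exploit"):
    print(f"if the rival plays {rival!r}, my best reply is {best_reply(rival)!r}")
# Both lines print 'exploit': unilateral restraint is punished, which is why
# regulation or coordination is needed to escape the trap.
```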

So System B societies can only flourish if they have a huge network of support to do so. And there is a heavy incentive for corrupt System A nations to not lose their position, because in those systems, social mobility is more limited and there are bigger privileges for being on top. I hope I made clear that my objection is not dogmatic, but based on the interaction between rational agents. 

I always recommend the classic "Meditations on Moloch." A nice little summary of game theory.

1 hour ago, Israfil said:

This is basic temporal preference theory. Contemporary democracy and capitalism simply do not allow long-term plans and investments to flourish, as the most prevalent incentives are to maximize short-term gain. As a president, you won't face the consequences of a bad government, so you can make decisions that give you short-term popularity at the cost of long-term stability. Look at the US debt-to-GDP ratio and you have an example of there being no incentive to be long-term oriented.

Same with business. In the private sector, the extraction rates of natural resources such as construction sand and silicon (the second most abundant element in the Earth's crust) are both absurd examples of how strongly we are incentivized to disregard our long-term survival in favor of competitiveness. We are literally running out of sand for construction. We had a silicon shortage for computer chips back in 2020. If Nvidia doesn't extract like this, Intel will, and they will kill Nvidia's business and keep exploiting the Earth anyway.

I understand that there are systems that are better for the short term and bad, or even counterproductive, for the long term; however, that doesn't prove that what I suggested is impossible (that something good for the long term could be good for the short term as well). The examples you brought up only prove that certain systems that are good for the short term won't be good for the long term, not that something good for the long term will necessarily be bad for the short term.

I can bring up examples of methods or systems where I can demonstrate how something good for the long term can also be good for the short term, and not just that, but can be better than other short-term tools as well.

So let's say we have a really narrow goal: "acquire as much wood from the forest as possible." You, using a big axe, will have x amount of wood in the short term and maybe 4x in the long term. Me inventing a new tool (a chainsaw) will help me outwork you in the long term and in the short term as well. This is just one example where creating/implementing a new tool can help you achieve more in both the long and the short term (with that example I have shown that it is not logically impossible for something good for the long term to be not just good for the short term but even better for the short term than other tools). It also demonstrates that, depending on the context and how we define our goals, it is not a must, a necessity, or a given that we have to sacrifice more in the short term if we want to do better in the long term.
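
A minimal sketch of that arithmetic (the rates are assumptions chosen only to make the comparison concrete):

```python
# Toy arithmetic for the axe-vs-chainsaw point; the rates are made-up assumptions.
AXE_RATE = 1.0        # units of wood per day
CHAINSAW_RATE = 5.0   # the better tool is simply faster at every horizon

for days in (1, 10, 100):
    print(f"after {days:3d} days: axe = {AXE_RATE * days:6.0f}, "
          f"chainsaw = {CHAINSAW_RATE * days:6.0f}")
# The chainsaw wins in the short term AND the long term, which is all the
# example claims: "better long term" does not have to mean "worse short term"
# when the gain comes from a better tool rather than from sacrificing the present.
```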

The assumption that there has to be a necessary tradeoff between the long term and the short term is the main problem here.

I know that being able to prove something is logically possible is far from it being possible to implement or even create in the real world, but this is square one that we have to agree on. If we can't even agree that it is a logical possibility, then we can't move forward to the next part of the discussion, where we get into the problems of real-world implementation and actual examples of the concept.

Edited by zurew

51 minutes ago, zurew said:

So let's say we have a really narrow goal: "acquire as much wood from the forest as possible." You, using a big axe, will have x amount of wood in the short term and maybe 4x in the long term. Me inventing a new tool (a chainsaw) will help me outwork you in the long term and in the short term as well. This is just one example where creating/implementing a new tool can help you achieve more in both the long and the short term (with that example I have shown that it is not logically impossible for something good for the long term to be not just good for the short term but even better for the short term than other tools). It also demonstrates that, depending on the context and how we define our goals, it is not a must, a necessity, or a given that we have to sacrifice more in the short term if we want to do better in the long term.

 

You have made a logical mistake here. What determines which strategy maximizes long-term wood collection is not the speed at which you acquire wood, but how you manage the size of the forest. It's the balance between extracting the resource and letting the resource recover that matters. Short-term strategies do not care about the recovery of the resource, and therefore beat every long-term strategy in the short run. Long-term strategies will eventually yield more resources, but they require managing exploitation on scales that, in many cases, exceed human lifetimes. Transcendence of time and survival is a must for these models to work. It is not a technical problem; it is a consciousness and identity-transcendence problem.
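
A minimal sketch of that point (the logistic regrowth model and every number are illustrative assumptions, not forestry data): with a renewable resource, long-run yield depends on how much of the stock you leave to recover, not on how fast your tool cuts.

```python
# Toy renewable-resource model; all parameters are illustrative assumptions.
def simulate(harvest_per_year, years=100, forest=1000.0,
             regrowth_rate=0.1, capacity=1000.0):
    total = 0.0
    for _ in range(years):
        cut = min(harvest_per_year, forest)
        forest -= cut
        total += cut
        # logistic regrowth: fast when the forest is healthy, zero when it's gone
        forest += regrowth_rate * forest * (1 - forest / capacity)
    return total, forest

greedy = simulate(harvest_per_year=200)   # cut as fast as the "chainsaw" allows
steady = simulate(harvest_per_year=20)    # stay near what the forest can regrow

print("greedy: total wood %.0f, forest left %.0f" % greedy)
print("steady: total wood %.0f, forest left %.0f" % steady)
# The greedy strategy is ahead for the first few years, then the forest
# collapses; the steady strategy harvests far more over the century. Managing
# recovery, not tool speed, decides the long run.
```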


@zurew Like I said, if you want to share a serious, specific plan for how to move our governments to Game B, I will listen. I have not really heard such a thing yet. I am not interested in poo-pooing it if it has a chance of working.

Edited by Leo Gura


10 hours ago, zurew said:

Having only a rational, calculating brain without any instinct for survival = death. If you think that having an instinct to survive is an emotion, then sure, framed that way your argument could be correct, but by that framing you are arguing for something that necessarily leads to extinction and death, because without an instinct for survival, why would anyone or anything want to maintain its survival?

The instinct for survival is a set of behaviors driven by emotions and feelings. When you feel fear, you flee or freeze. When you feel rage, you fight. When you feel pain, you withdraw. The feeling brain uses the left brain like a tool to execute strategies in accordance with its will. A robot can be programmed to defend itself, but this is not an instinct for survival; it is an algorithm. The robot is no more sentient than a hammer.




If we look at this from a Spiral Dynamics perspective, this is toxic Orange. The solution is to move into Stage Green. 

Become a New Age hippie, go out into nature, get rid of your smartphones, get into a circle, sing Kumbaya, and form intentional communities that are based on natural holarchies, such as plant-based agricultural intentional communities.

Stop depending on the system for everything: for money, for food, for water, everything. Get into decentralized money, get your own guns, and protect your shit. And get rid of all government. No government, no AI, 'cause then there are no big corporations getting that leverage over the masses. So, no incentive to do this shit.

And finally, there needs to be a grassroots movement of scientists, starting with the education system, to combat the AI itself. That will make or break our chances of containing AI and using it for our benefit.


In before we get lazy, plug AI into the grid, and ask it to "solve world hunger and climate change," and then it nukes everything, because nobody can be hungry if they're dead, and the world would instantly be colder for decades to come in a global nuclear winter.

xD

If we obsess about efficiency too much, this could be a real outcome.

Edited by Roy


20 hours ago, Leo Gura said:

I think the biggest mistake these Game B theorists make is underestimating the resilience of human civilization. Human civilization is the most anti-fragile thing on this planet. That's my guess. But I could be wrong.

I have the same feeling concerning the anti-fragility of human civilization.

Strange that this isn't seen by more people. Probably lots of people like to keep busy saving "fragile" systems.

Just look at how societies are able to generate complex stories/myths for survival, and are able to exclude huge amounts of contradictory information to keep their values/stories going... it seems like the two-legged ape was designed for that.... 9_9

We will get to see a large part of the solution to the Fermi paradox in our lifetimes (is AI causing the universal silence in the universe, or not?). I have the feeling that it is not caused by AI, but who knows....

