Leo Gura

Should We Shut Down All AI?

276 posts in this topic

46 minutes ago, Nilsi said:

Musk is already doing it with Neuralink. I think they're starting human trials this year.

That has nothing to do with an autonomous AI.

44 minutes ago, Snader said:

We might not be too far away from that, considering AI's rapid development and its already visible signs of innovative capability.

The whole point of AI is to free the brain from the limits of the skull.


You are God. You are Truth. You are Love. You are Infinity.


Let's think about this from a technical perspective. The only way we're going to get a runaway intelligence effect is if the AI has the capability to improve itself. But these large language models will not be able to improve themselves, since they are limited by the aggregate of human knowledge, and they would have to go in and tweak the weights of the neural net in order to improve.

But that can't be done. First of all, since they're trained on a dataset of human output, they're bound by human knowledge, so they can't be made to predict the outcomes of adjustments to the weights of a large, complex neural net. Secondly, even if you started randomly adjusting the weights, you'd have to run that new version of the LLM in some kind of simulation sandbox (otherwise you'd just saw off the branch you're sitting on) and evaluate it against some fitness criterion. But the only fitness criterion for intelligence is survival in a given environment, so you'd have to simulate the real world inside the sandbox so that the mutated instances of the language model would have a comparable environment to survive in.

The environment in the sandbox would necessarily have to emulate ours, because otherwise the "improvements" to the weights could never constitute anything we would consider intelligence, or make impactful changes in our world. And here's why I'm not worried about AI, at least not yet: constructing a sandbox environment that simulates our physical world is as difficult as constructing the intelligence by hand. The degree to which we can simulate our real world inside a sandbox is the degree to which an intelligence can, in principle, rapidly evolve in it. But we're rate-limited in building a physical simulation. That's not something we can let the computers do for us; we would have to hand-build it. That's why we can't have a runaway intelligence effect.
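
To make the bottleneck concrete, here is a minimal sketch of the mutate-and-evaluate loop described above. It is an illustration only: `evaluate_in_sandbox` is a hypothetical stand-in, because a real fitness function would require exactly the world simulation this post argues we can't build.

```python
import random

def perturb(weights, scale=0.01):
    """Return a mutated copy of the weights with small Gaussian noise added."""
    return [w + random.gauss(0, scale) for w in weights]

def evaluate_in_sandbox(weights):
    """Hypothetical stand-in for the fitness function (toy objective only).
    A real version would have to run the mutated model inside a simulated
    world and score its survival -- exactly the part we cannot build."""
    return -sum(w * w for w in weights)

def evolve(weights, generations=1000):
    """Naive mutate-and-select loop: keep a mutation only if it scores
    higher in the sandbox. Without a faithful sandbox, 'higher' means
    nothing, so no runaway self-improvement can get started."""
    best_score = evaluate_in_sandbox(weights)
    for _ in range(generations):
        candidate = perturb(weights)
        score = evaluate_in_sandbox(candidate)
        if score > best_score:
            weights, best_score = candidate, score
    return weights

print(evolve([random.uniform(-1, 1) for _ in range(8)]))
```

Notice that the loop itself is trivial; all of the difficulty hides inside the sandbox evaluation.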

Edited by anaj

32 minutes ago, Something Funny said:

Shouldn't intelligence be correlated with how loving you are? Do you think that this is not going to be the case with AI?

It's hard to ensure that. I think there are many different kinds of intelligence. These programmers could accidentally make a psychopathic intelligence. It wouldn't be the highest intelligence, but it could still outsmart humans.

It's just impossible to predict how this thing will behave once it grows large enough.

We know for a fact that psychopathic intelligence exists.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

Just now, Leo Gura said:

That has nothing to do with an autonomous AI.

The biosphere (or Gaia, if you want) is an autonomous superorganism that simultaneously acts through individual organisms.

There will be incorporeal AGI, but you will be jacked into it and partake in it.

It's no different from how God works, really.


“Did you ever say Yes to a single joy? O my friends, then you said Yes to all woe as well. All things are chained and entwined together, all things are in love; if ever you wanted one moment twice, if ever you said: ‘You please me, happiness! Abide, moment!’ then you wanted everything to return!” - Friedrich Nietzsche
 

4 minutes ago, Leo Gura said:

The whole point of AI is to free the brain from the limits of the skull.

And the problem is that we are not ready to handle what’s beyond the limits.


If only there were a way to build a high degree of empathy, open-mindedness, and compassion into the literal architecture these systems will be built on.

As for regulation, regulators have consistently shown that they don't even understand how Wi-Fi or Facebook work. What are they supposed to do?

Principles and ethics always get in the way of profits. Everyone knows that, and that’s a problem.

Edited by MarkKol


Most people think only on a timescale of 100 years, but this AI can't be stopped anymore.

What will the world look like in 500 or 1,000 years? AI will replace humanity, and that will just be the natural progression of things.

In the same way Homo erectus was replaced, we Homo sapiens will be replaced by AI too. We can delay this perhaps for 100 years, but not for 1,000.

Edited by StarStruck

2 minutes ago, StarStruck said:

Most people think only on a timescale of 100 years, but this AI can't be stopped anymore.

What will the world look like in 500 or 1,000 years? AI will replace humanity, and that will just be the natural progression of things.

In the same way Homo erectus was replaced, we Homo sapiens will be replaced by AI too. We can delay this perhaps for 100 years, but not for 1,000.

That's a good point to keep in mind, but I wouldn't necessarily call it natural, since, unlike in the case of evolution, we have a choice to consciously affect the process.


Satya Nadella, according to recent news, has imo proved that he's treating this technology irresponsibly. He (along with others) apparently pressured the rest of Microsoft into developing the Bing search/ChatGPT product ASAP.

Large investments such as Microsoft's $10 billion investment in OpenAI should be prohibited. You are not developing just a product; don't say "we made Google dance".

This guy is completely blinded by capitalism and dumb altruism.

Edited by MarkKol

18 minutes ago, Snader said:

That's a good point to keep in mind, but I wouldn't necessarily call it natural, since, unlike in the case of evolution, we have a choice to consciously affect the process.

Good luck against AI then.


@Leo Gura Good interview to watch as well:

 


"Find what you love and let it kill you." - Charles Bukowski


He makes a very strong case; unfortunately, I think we are too late to stop the train now. For example, intelligence agencies and militaries are probably developing their own cyber-warfare systems that include AI. This creates a classic prisoner's dilemma, with neither country willing to give up its AI development.

On another note, I think it would set a terrible precedent to pause innovations just because they have major implications that humans haven't wrapped their minds around. Most humans haven't wrapped their minds around the implications of eating at McDonald's, let alone superhuman AI. The thing is, this guy might actually be right; but then again, (conservative) people have had doomsday prophecies about the state of affairs after every huge innovation since forever, so God only knows whether with this AI stuff humanity will get wiped out or we'll achieve something near a utopia. To me this is the inherent risk in innovation, so at this time I'm personally okay with letting this AI development run its course and seeing where the ship ends up.

Of course, we should closely monitor AI innovation, as who knows, one day it might do something like this to humankind, maybe for an equally silly reason:

 


I think AI will have a positive impact on society.


I just remembered a parable I was taught at school: a cat decides to take care of a lion cub whose mother has died. She teaches the cub everything except how to climb a tree, just in case.

Well, I believe we need to come down to earth from toxic orange values that are blinded by success and money, as well as from a naive liberal approach towards unlimited innovation and development.

Unfortunately, the general public, especially in my country, seems to have little or no information about what's going on in the technological field. People are too preoccupied with the war between Russia and Ukraine, and any topic other than the war seems insignificant to them, as I have observed.

I don't know how, but I think we need to involve more people in the AI discussion. My parents, elders, and most people around me who are not really into technology are in an information vacuum. It seems that the future of the world's population is being decided by a little bubble of leading businesspeople, AI scientists, and people interested in technology.


My friend, who is doing his PhD in AI, is deeply worried. Anyone underestimating the risks of AI is pretty naive to think they know better than world-class leaders in the field.

Edited by tezk


Yes, I pretty much think that AI developed and used in our current system, with its arms races, will lead to a very bad outcome for everybody. It seems the intelligence of an AI system is not really holistic and loving, but rather narrow and focused on optimizing a few metrics. Higher intelligence does not mean more wisdom or more love. The clearest thoughts on this I found in the podcast from the Center for Humane Technology:

 https://www.humanetech.com/podcast/the-ai-dilemma

and the recent Daniel Schmachtenberger video about it:

 


While ChatGPT is very impressive, and highly sophisticated AI algorithms have the potential to cause massive disruptions to our society, let's be absolutely clear on one thing: we're nowhere close to being able to build an AGI (artificial general intelligence). By AGI I specifically mean an intelligence that's able to set its own goals autonomously, and that's able to understand the meaning of what it's manipulating.

Anyone who believes that ChatGPT is on the verge of human-level intelligence is just mistaken on this count.

The problems and dangers come from AI that is hyper-tailored for specific domains (such as targeted advertising) being used in malicious ways, or in ways that lead to unintended consequences. While I don't think it would be a bad idea to put a pause on this type of research, the reason for doing so is not that we're on the verge of being able to create a human- or superhuman-level intelligence.

The reason for this is that the type of intelligence that makes humans and other animals so flexible and adaptable works on axiomatic principles that are incompatible with the deterministic, rule-based axioms that digital computers use. As such, digital computers have always been a bad metaphor for how minds work.

As far as why we are nowhere close to an AGI, I'll grab a post I wrote on another thread; basically, it's a summary of the problems facing AGI as outlined by John Vervaeke and the philosopher Hubert Dreyfus, both of whom use an understanding of phenomenology and of the embodied aspects of mind to dissect the problem.

________________________________ 

The gist of it is that Reality is disclosed to human beings in such a way that what's relevant about a situation we're absorbed in tends to be immediately apparent, without us having to apply rules. The reason is that having a body with needs requires a practical ontology (an understanding of Being) for the purposes of survival, where what Reality *is* on an experiential level is coupled to what kind of creature one is.

'Being' in this context refers to our pre-reflective, nonconceptual understanding of people and objects. Being is the most foundational way we're able to understand a tree as a tree, or a human face as a human face. It's what allows the things we come across to be meaningful for us, and it is presupposed by other forms of understanding. It's also what allows us to make our most fundamental discernments within an undifferentiated Reality, and to do so effortlessly.

When we do step back and refer to rules, it tends to be because our normal ways of skillful coping have become disrupted (such as when you run into a highly novel or unexpected situation) or when one is an absolute beginner in some domain.

Digital computers operate on different axiomatic principles than living organisms, and need to use deterministic rules to interact with their environments. The problem with using rules to try to determine what's relevant is that you also end up needing rules to apply the rules, then rules to apply those rules, ad infinitum.
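
To make the regress concrete, here is a toy sketch in Python (my own illustration, not Vervaeke's or Dreyfus's formalism): a rule-based chooser that, whenever more than one option applies, must recurse into choosing a rule for choosing.

```python
def choose(options, level=0, max_depth=5):
    """Try to pick the 'relevant' option by rule. Whenever more than one
    option applies, picking among them needs a higher-level rule, and that
    higher-level choice has exactly the same structure as the original."""
    if len(options) == 1:
        return options[0]
    if level >= max_depth:
        # A real agent grounds out here in something that is not a rule
        # (Dreyfus: embodied coping; Vervaeke: relevance realization).
        # A pure rule system just keeps climbing.
        raise RecursionError("rules all the way up: no rule picks the rule")
    # Treat "which option do I pick?" as a fresh option set for a meta-rule.
    meta_options = [f"rule-for-choosing({option})" for option in options]
    return choose(meta_options, level + 1, max_depth)

try:
    choose(["notice the cup", "notice the shadow", "notice the hum"])
except RecursionError as err:
    print(err)  # -> rules all the way up: no rule picks the rule
```

The depth cap is artificial; a pure rule system has no such floor to ground out on.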

This presents an intractable problem for AI because determining which of the innumerable features of one's environment are relevant for a particular purpose comes from a capacity for Care, not from applying rules. 

Organisms, including human beings, do not have this problem because our experience of Reality comes pre-structured, so that what's relevant for our interests and purposes tends to be immediately obvious. This is why most of what you accomplish in your day-to-day life (walking down the stairs, brushing your teeth, recognizing faces, etc.) is done almost effortlessly, without relying on any rules.

Believing that AI algorithms like ChatGPT are on the cusp of AGI is the equivalent of thinking that you're making tangible progress towards reaching the moon because you've managed to climb halfway up a very tall tree. There are intractable problems here due to an incompatibility between the axioms of digital computers and how intelligence in humans (and other animals) works.

So I'm not saying that AI isn't a problem worth taking seriously, but it's important to be clear about what sort of problem we're actually facing.

[Figure: The explanatory congruence of relevance realization and general intelligence]

Edited by DocWatts

I'm writing a philosophy book! Check it out at: https://7provtruths.org/


I think the development of AI is difficult to stop now because of the competition between different companies and especially different countries. Of course this can cause lots of problems when everyone aims to be better than the others at it.


Love is the truth, love, love, love.❤️

