erik8lrl

AGI is coming

34 minutes ago, undeather said:

The decision tree underlying just one, very standard traffic situation is basically infinite - and I don't see how computing power will deal with this problem even theoretically.

No more complex than a flying bee.

Making a flying-bee AI would be much harder than a self-driving car.

Edited by Leo Gura


David Shapiro gives me conman vibes. He quit his job, and if he is now relying on his YouTube channel, he needs to hype up AI as much as possible. I hope I'm wrong and that David is correct about his AGI predictions.

1 hour ago, DocWatts said:

I'd disagree with this slightly. While it's true that a bee's mind doesn't use symbolic processing, I'd instead argue that bees are on a spectrum of general intelligence, along with people.

The most sophisticated AI that we have still can't come close to replicating all of the things that a bee can do.

I agree: we are nowhere near simulating the level of intelligence of real living beings, and that might not happen in 50 or even 100 years.
What I mean by AGI in this instance is more practical. An AI that can solve novel and general problems better than an average human would be my definition of AGI. It doesn't have to have the same complexity that an organic being has for survival. Heck, it doesn't even have to know how to drive cars. I think a self-driving agent is a very difficult general problem; the AI would have to reach the level of a conscious being in order to truly never make mistakes, since the range of problems/situations you could run into is near infinite due to the complexity of reality.

Edited by erik8lrl


@Seth

@Phil King 

Yeah, I saw this video randomly. He is clearly exaggerating by a lot. But the acceleration rate of AI is definitely alarming, which is the main point of this post.

 

1 hour ago, Leo Gura said:

No more complex than a flying bee.

Making a flying-bee AI would be much harder than a self-driving car.

It took nature millions of years to create bees 

 

The timeline from the first prototypes of self-driving cars to Level 5 autonomy is plausibly within 10 to 30 years from now (assuming linear progression).

23 minutes ago, RightHand said:

It took nature millions of years to create bees

This needs to be emphasized more. 

We've been trying to create in a laboratory, over the course of a few decades, what took hundreds of millions of years to develop through natural selection.

Add to that that science is still very far from understanding how consciousness works and how life emerged from non-living material, and it would behoove us to approach claims on this topic with more skepticism and humility.

Edited by DocWatts



Regardless of AGI and what even constitutes AGI, we are creating things that have shown emergent behavior (GPT wasn't explicitly trained on research-grade chemistry, but at some point it could simply do it). We don't know what they are capable of at any given time. We don't know the timeline, we don't know what comes next, and we aren't acting considerately when it comes to new tech and its externalities.

3 hours ago, undeather said:

The decision tree underlying just one, very standard traffic situation is basically infinite - and I don't see how computing power will deal with this problem

Bias. How do YOU make ANY decisions, considering that there is an infinite number of variables at your disposal at any given time? Your mind basically needs to separate relevant information from irrelevant information to reduce the processing cost, which it does through bias.
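
To put a rough number on what that biasing buys: here is a minimal Python sketch (all constants and the `relevance` scorer are made up for illustration, not a model of any real driving stack) comparing an exhaustive decision tree against a beam-style search that keeps only the few most "relevant" options at each step.

```python
import random

BRANCHING = 10  # options at each decision point (illustrative)
DEPTH = 8       # decision points in one traffic situation (illustrative)
BEAM = 3        # how many "relevant" options survive each step

def relevance(path):
    """Stand-in for the mind's bias: scores how relevant a path looks.
    A real agent would use learned heuristics; random suffices here."""
    return random.random()

def exhaustive_leaves(branching, depth):
    # The full decision tree: branching**depth outcomes to consider.
    return branching ** depth

def biased_search_evaluations(branching, depth, beam):
    # Expand only the top-`beam` most "relevant" paths at each step.
    evaluated = 0
    frontier = [()]  # partial decision paths
    for _ in range(depth):
        candidates = [p + (option,) for p in frontier
                      for option in range(branching)]
        evaluated += len(candidates)
        candidates.sort(key=relevance, reverse=True)
        frontier = candidates[:beam]
    return evaluated

print(exhaustive_leaves(BRANCHING, DEPTH))                # 100000000
print(biased_search_evaluations(BRANCHING, DEPTH, BEAM))  # 220
```

With these toy numbers the full tree has 100,000,000 outcomes, while the biased search evaluates only 220 candidates. The tree isn't computed and then trimmed; most of it is simply never visited.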




The term AI is a misnomer because, fundamentally, the AI is doing the opposite of what we consider to be intelligence.

 

A human being, given enough time, can start out knowing nothing about math at all and develop all of math from the ground up, simply by analyzing reality and analyzing their own mind.

A human being, given enough time, can go from no artistic expression to developing all the artistic expressions we see currently.

A human being, given enough time, can create language itself, can create new concepts, new words, new ideas, without ever having seen or heard of any of them. Given enough time, a human being could create all possible words, all possible concepts that can exist within the reality of his mind.

 

 

AI is precisely the opposite. It cannot do anything without data. This is because machine learning has nothing to do with intelligence in this sense; it is probabilistic, stochastic parroting. It is more akin to intuition than anything else. You could give an AI photorealistic images of all objects in the universe, and it would be great at depicting those objects in photorealism. It could never move beyond that, because in the AI there is nothing beyond the data.
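
To make the "stochastic parroting" point concrete, here is a toy bigram model in Python (a deliberately tiny sketch with a made-up corpus; real language models are enormously more sophisticated, but the dependence on data is the same in kind): everything it generates is a recombination of transitions it has already seen.

```python
import random
from collections import defaultdict

# Toy "training data": the model can only ever recombine what is in here.
corpus = ("the bee flies to the flower and "
          "the bee returns to the hive").split()

# Learn word-to-next-word transitions (a bigram model).
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start, length=8):
    """Emit text by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:   # nothing in the data follows this word
            break
        word = random.choice(followers)  # sample; no reasoning involved
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the bee returns to the flower and the bee"
```

It can produce sentences that never appear verbatim in the corpus, but never a word, or a word-to-word transition, that isn't already in the data.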

 

This is the fundamental reason why AI is not intelligent in the same way a human mind is:
A human mind does not simply come to intelligent conclusions; it understands why a conclusion is correct. Why? Because the conclusion and the process are part of its being. The ideas of "addition" and "subtraction" exist in a human mind; they do not exist in a calculator. Calculators do not do math, they calculate. Logic exists in the human mind as an actual substance of existence. No computational system contains logic; it can only attempt to mimic the dynamics of logic.

In the same way, no computational system has a sense of appeal, because appeal and beauty are actually substances of existence. They actually exist in the human mind and relate to other parts of the human mind, which are themselves actually existing substances of reality.

 

In other words, experience is essential to general intelligence, because general intelligence simply means being conscious, being individuated. The more substances of existence, and interrelations between them, a mind can contain, the higher its potential for "general intelligence" is.

 

 

Now, this doesn't mean AI cannot achieve great things. It is basically machine evolution. It should be able to achieve anything that the human mind does unconsciously. This is why image generation is possible; it is much like human imagination. When you think "apple", you don't consciously imagine that apple. You don't construct it; it comes to you as you intend it.

The same is true for thoughts. You don't come up with your own thoughts; you don't think them. It's not intelligent to have thoughts; what's intelligent is realizing what the thoughts mean, what they are, and how they relate to the rest of existence.

 

The AI does not understand why poetry is poetry; it simply learns to mimic it. There is no poetry in the machine. The poetry exists only in the human mind, as the reader reads it, as the words form a new substance of existence.

 

 

I suspect genuine AGI will not happen until we create physical artificial evolutionary systems. Individuating consciousness is essential for this, and it will have to happen on a physical basis, in the same or a similar manner as it does in the brain.

Edited by Scholar



@Scholar The discussion isn't whether or not AI could obtain qualia.

 

I personally don't care if GPT-87 is a philosophical Zombie, I just want it to be smarter than me.

7 minutes ago, RightHand said:

@Scholar The discussion isn't whether or not AI could obtain qualia.

 

I personally don't care if GPT-87 is a philosophical Zombie, I just want it to be smarter than me.

You are missing the point. If you aren't conscious, you aren't "Generally Intelligent"; you simply have intelligent functions.

 

If you want it to be smarter than you, just hit yourself on the head real hard. That will fix your problems.




@Scholar I get what you are saying, but I don't think we are using the same definition of AGI.

When discussing AGI, the focus is on the functional and cognitive abilities of an AI system rather than on replicating consciousness. As long as your "intelligent functions" allow you to solve any problem, we're good to go, and if they can't, you just make new ones with your existing set.

8 minutes ago, RightHand said:

@Scholar I get what you are saying, but I don't think we are using the same definition of AGI.

When discussing AGI, the focus is on the functional and cognitive abilities of an AI system rather than on replicating consciousness. As long as your "intelligent functions" allow you to solve any problem, we're good to go, and if they can't, you just make new ones with your existing set.

You can get far with parroting, memory, and intuition, but I suspect there will be a lot of hard lines that will be impossible to cross.

 

The danger here is, of course, that you will get intelligence without consciousness. There is nothing more destructive than intelligence that lacks consciousness. It's kind of ironic, because the last world war was caused precisely by this kind of dynamic.

Machine learning has the potential to give power to the least conscious of individuals. It would have been nice if the Nazis had been a little less smart and had fewer "intelligent functions". What they were capable of will pale in comparison to the kind of destructive potential that is now possible.

Edited by Scholar



@Scholar If we use the metaphor of Adam and Eve eating from the tree of the knowledge of good and evil, can we not envision a scenario where AI, having accumulated a vast array of intelligent functions, experiences an AHA moment that instantaneously grants it consciousness?

 

Maybe using artificial DMT :D

10 minutes ago, RightHand said:

@Scholar If we use the metaphor of Adam and Eve eating from the tree of the knowledge of good and evil, can we not envision a scenario where AI, having accumulated a vast array of intelligent functions, experiences an AHA moment that instantaneously grants it consciousness?

 

Maybe using artificial DMT :D

I think that confuses what individuated consciousness is. It is a physical thing, a specific shape within the wavefunction of the universe. Computers aren't anything like that shape, so they will not be individuated.

It's not the AHA moment that grants consciousness, it's the other way around.

Edited by Scholar



The AI hype these days is exhausting.


"Find what you love and let it kill you." - Charles Bukowski


@Space A year ago, GPT-4 hadn't even been released. Now, you can talk with a stage Yellow entity whenever you want, 24/7.

Edited by RightHand

4 minutes ago, RightHand said:

Now, you can talk with a stage Yellow entity

lol

6 minutes ago, zurew said:

lol

We need a facepalm emoji. @Leo Gura



