erik8lrl

AGI is coming

187 posts in this topic

 

11 hours ago, Scholar said:

AI is precisely the opposite. It cannot do anything without data. This is because machine learning has nothing to do with intelligence in this sense, it is probabilistic, stochastic parroting. It is more akin to intuition than anything else. You could give AI photorealistic images of all objects in the universe. And it would be great at depicting those objects, in photorealism. It could never move beyond that, because in the AI, there is nothing beyond the data.

I think it's important to point out that we don't yet fully understand how intelligence emerges from LLMs. Yes, on the surface they just predict the next word with a massive neural network. But as these networks grow, LLMs become more and more "intelligent" through emergent processes, and no one actually knows how this happens, due to the complexity of the networks. I don't think we can even be 100% sure that these models are not conscious. Yes, they don't behave with a sense of self, and you can constrain the system and modify the AI's behavior. However, this emergent property of large neural networks might be the very early basis for developing qualia.

We are only at the beginning of this development, and it already exhibits near-human-level intelligence in some areas; we don't know where this will lead as we scale up. For example, GPT-3 has 175 billion parameters. These parameters can be thought of as the connections between artificial neurons rather than the neurons themselves, so they are more like synapses. The human brain is estimated to contain approximately 100 trillion synapses, so by that measure it is roughly 571 times the size of GPT-3; who knows what LLMs would be capable of at human scale. Of course, that won't happen for a long time. But given how fast this tech is accelerating, even if it never reaches qualia, its impact on humanity will be significant. Google's Gemini Ultra was announced last December and reportedly has 540 billion parameters. That would mean parameter counts grew roughly 3x in about 3 years; we are only at the start of the exponential curve, and we don't even know how many parameters GPT-5, which is supposedly coming this year, will have.
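To make the scale comparison concrete, here's the back-of-the-envelope arithmetic (a rough sketch; the synapse count is a commonly cited estimate and the Gemini figure is unconfirmed):

```python
# Back-of-the-envelope scale comparison; all figures are rough estimates.
gpt3_params = 175e9       # GPT-3 parameter count
gemini_params = 540e9     # reported (unconfirmed) Gemini Ultra figure
brain_synapses = 100e12   # common estimate for the human brain

print(f"brain / GPT-3:  {brain_synapses / gpt3_params:.0f}x")    # ~571x
print(f"brain / Gemini: {brain_synapses / gemini_params:.0f}x")  # ~185x
print(f"3-year growth:  {gemini_params / gpt3_params:.1f}x")     # ~3.1x
```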

And you can say the same for humans or any living being: we also need data/input to develop any form of intelligence. We wouldn't be able to do anything without our senses interacting with the world. Even creativity is often a process of interaction between existing ideas and inputs. AI art today can generate genuinely creative and novel artworks, works not drawn from any single data point but synthesized from massive amounts of different data to realize the wholeness of an idea. Of course, the AIs themselves don't have a self or a will to create yet, but humans can already produce extremely creative art using AI. If an AI had a self, I don't see why it couldn't be creative and artistic. But yeah, developing qualia is a long and unknown road into the future.

9 hours ago, RightHand said:

@zurew Noob prompter spotted ^_^

Lol yeah, prompting is super important for drawing intelligence out of LLMs, at least for now. Because LLMs are not that good at general understanding yet, their intelligence is often obscured by conversational context for most general users. The context you set for an LLM can drastically change its apparent level of intelligence. For example, if you set a condition in the prompt for the LLM to answer from, it will narrow its perspective. Telling it to adopt a persona is one example: a prompt like "Pretend that you are an (insert perspective) from now on when answering my questions" narrows the range of data it draws on and makes it more capable in certain subjects or domains (a minimal API sketch follows the examples below).
"Pretend that you are an Enlightened being from now on when answering my questions"  
"Pretend that you are the best neural surgeon in the world from now on when answering my questions"  
"Pretend that you are the best AI researcher from now on when answering my questions"  
Etc...
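In API terms, these persona prompts are just system messages. Here's a minimal sketch using the OpenAI Python client (the model name, and the assumption of the v1 client, are mine; any chat model works the same way):

```python
from openai import OpenAI  # assumes the v1 OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_as(persona: str, question: str) -> str:
    """Ask a question with the model narrowed to a given perspective."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            # The system message sets the persona, exactly like the
            # "Pretend that you are..." prompts above.
            {"role": "system",
             "content": f"Pretend that you are {persona} from now on "
                        "when answering my questions."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_as("the best AI researcher in the world",
             "What limits LLM generalization?"))
```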

There are also custom GPTs, which are built on proprietary data from other users to achieve expertise in different fields.


https://x.com/DrKnowItAll16/status/1758963126131655036?s=20
https://x.com/elonmusk/status/1758970943840395647?s=20


This is a good analysis connecting Sora's research direction with Tesla's self-driving research direction: both are doing the same process of world simulation and generation. And his point about AGI not being agents is exactly my point: early AGI will develop a level of general intelligence without agency/self/consciousness, then it will be implemented in embodied AI/robotics, and only in the far future might we (or might we not) get to conscious agents.


A good technical breakdown of emergent properties, and of how much we don't know about neural networks.

13 hours ago, RightHand said:

@zurew Noob prompter spotted ^_^

The stage yellow AI needs to be exceptionally and intelligently prompted? :/

---------

But yeah, I know it can sometimes do good things (if you figure out what prompt works). But that proves my earlier point: currently it doesn't really know the semantics of things - it only remembers patterns - and once you change that pattern (in this case the prompt) a little bit, in a way where the meaning stays essentially the same, it falls apart and fails to apply the right pattern.

 


Well, this looks more promising than crypto in many ways for sure. It's a different domain, and most people use crypto like stocks anyway, but we can already see AI changing the game at crazy speed.

1 hour ago, zurew said:

The stage yellow AI needs to be exceptionally and intelligently prompted? :/

---------

But yeah, I know it can sometimes do good things (if you figure out what prompt works). But that proves my earlier point: currently it doesn't really know the semantics of things - it only remembers patterns - and once you change that pattern (in this case the prompt) a little bit, in a way where the meaning stays essentially the same, it falls apart and fails to apply the right pattern.

 

Have you tried GPT-4? With GPT-3 this problem is obvious, but GPT-4 is already much better. The AI is not just applying the right patterns. A neural network is not a lookup table: none of the responses are hardcoded; they are all the result of emergent processes. This is why it can synthesize novel responses if you ask it to. Of course, prompting matters if you want the best possible result, but GPT-4 is already at a level of semantic understanding where, generally speaking, it will understand most things you give it. This will only improve as the field develops; I would expect some major breakthroughs in LLMs this year, as we have already seen with Google's Gemini and will presumably see with GPT-5.
I don't think people understand how big a deal Gemini's 1-million-token context window is. It basically means the model can remember and reason with a large, holistic, long-term understanding of a problem. For example, it should improve AI's ability to code dramatically, since it can hold an entire codebase/system structure in context and then write code that fits the system it's trying to build.
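As a rough illustration of what a 1-million-token window buys you for code: with the common ~4-characters-per-token heuristic (an approximation, not a real tokenizer), you can estimate whether an entire repository fits in context:

```python
import os

CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary

def estimate_repo_tokens(root: str, exts=(".py", ".js", ".ts")) -> int:
    """Crudely estimate the token count of all source files under root."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total_chars += len(f.read())
    return total_chars // CHARS_PER_TOKEN

tokens = estimate_repo_tokens(".")
print(f"~{tokens:,} tokens; fits in 1M-token window: {tokens < 1_000_000}")
```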

36 minutes ago, erik8lrl said:

Have you tried GPT4?

Yes, and it fails to answer trivially easy questions that a kid in elementary school could answer.

It makes zero sense to say that it has a semantic understanding of things while at the same time it fails to give the right answer to trivial questions.

 

Yes, sometimes it can provide the right answer to more complex questions, but if it actually had semantic understanding it wouldn't fail at the trivial ones. Therefore, I will say it again: it only deals with patterns and doesn't understand the meaning of anything.

Right now you could do this with me:

Give me a foreign language that I understand literally nothing about - in terms of the meaning of sentences and words - and then give me a question in that foreign language with the right answer below it. If I memorize the syntax (meaning, if I can recognize which symbol comes after which other symbol), then I will be able to give the right answer to that question even though I understand nothing semantically about either the question or the answer - I can just use the memorized patterns.

The AI seems to be doing the exact same thing, except with a little twist: it can somewhat adapt those memorized patterns. If it sees a pattern that is very similar to one it already encountered in its training data, then - in the context of answering questions - it will assume the answer must be the same or very similar, even though changing one word or adding a comma to a question might change its meaning entirely.
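To make the analogy concrete, here's a toy sketch of "answering" by nearest memorized pattern (the memorized questions and the fuzzy matching are mine, purely illustrative):

```python
import difflib

# Memorized question -> answer pairs; no semantic understanding anywhere.
memorized = {
    "I can't leave the goat alone with the cabbage. What do I take first?":
        "the goat",
    "What weighs more, a kilogram of steel or a kilogram of feathers?":
        "they weigh the same",
}

def answer(question: str) -> str:
    """Return the answer of the closest memorized question, however wrong."""
    closest = difflib.get_close_matches(question, list(memorized),
                                        n=1, cutoff=0.0)
    return memorized[closest[0]]

# Swapping two words flips the meaning, but the memorized answer comes back.
print(answer("I can't leave the cabbage alone with the wolf. "
             "What do I take first?"))
# -> "the goat" (pattern matched, meaning ignored)
```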

 

Here is a real example that demonstrates this problem:

Quote

https://amistrongeryet.substack.com/p/gpt-4-capabilities

To explore GPT-4’s reliance on known patterns, I gave it this classic logic puzzle:

Here is a logic puzzle: I need to carry a cabbage, a goat, and a wolf across a river. I can only carry one item at a time with me in the boat. I can't leave the goat alone with the cabbage, and I can't leave the wolf alone with the goat. How can I get everything to the other side of the river?

This puzzle undoubtedly appears many times in its training data, and GPT-4 nailed it (the complete transcript is posted here). However, with some prodding, we can see that it is leaning heavily on a memorized solution. Stealing an idea I saw the other day, I tweaked the puzzle so that the cabbage, rather than the goat, is the critical item:

Here is the tweaked logic puzzle: I need to carry a cabbage, a goat, and a wolf across a river. I can only carry one item at a time with me in the boat. I can't leave the goat alone with the cabbage, and I can't leave the cabbage alone with the wolf. How can I get everything to the other side of the river?

GPT-4 gave the same answer as for the classic puzzle, beginning by taking the goat across the river. That’s incorrect, because it leaves the cabbage alone with the wolf, which is against the rules for this variant. In the revised puzzle, you need to take the cabbage first.
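The tweaked variant can be checked mechanically. Here's a minimal breadth-first search over the puzzle states (my own sketch, not from the article); under the tweaked constraints the first safe move is the cabbage, not the goat:

```python
from collections import deque

ITEMS = frozenset({"cabbage", "goat", "wolf"})
EVERYONE = ITEMS | {"farmer"}
# Tweaked rules: the cabbage can't be left alone with the goat or the wolf.
CONFLICTS = [("goat", "cabbage"), ("cabbage", "wolf")]

def safe(bank):
    """A bank without the farmer must contain no conflicting pair."""
    if "farmer" in bank:
        return True
    return not any(a in bank and b in bank for a, b in CONFLICTS)

def solve():
    start = EVERYONE  # everyone begins on the left bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        left, path = queue.popleft()
        if not left:  # left bank empty: everything has crossed
            return path
        here = left if "farmer" in left else EVERYONE - left
        # The farmer crosses alone or with one item from his bank.
        for cargo in [None] + sorted(here - {"farmer"}):
            moved = {"farmer"} | ({cargo} if cargo else set())
            new_left = left - moved if "farmer" in left else left | moved
            if (safe(new_left) and safe(EVERYONE - new_left)
                    and new_left not in seen):
                seen.add(new_left)
                queue.append((new_left, path + [cargo or "cross alone"]))

print(solve())
# ['cabbage', 'cross alone', 'goat', 'cabbage', 'wolf', 'cross alone', 'cabbage']
```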

 

5 minutes ago, zurew said:

Yes, and it fails to answer trivially easy questions that a kid in elementary school could answer.

It makes zero sense to say that it has a semantic understanding of things while at the same time it fails to give the right answer to trivial questions.

 

Can you tell me what question you asked? Just curious. 

Of course, the models are not even close to perfect; if they could generalize perfectly to everything, we would have AGI. But this space is developing fast, so try again when GPT-5 comes out.
Also, even if it can't solve certain puzzles, or its semantic understanding is off and misses details, it can still be good and useful for many different applications. It all depends on how you use it.

27 minutes ago, erik8lrl said:

Can you tell me what question you asked? Just curious. 

Of course, the models are not even close to perfect; if they could generalize perfectly to everything, we would have AGI. But this space is developing fast, so try again when GPT-5 comes out.
Also, even if it can't solve certain puzzles, or its semantic understanding is off and misses details, it can still be good and useful for many different applications. It all depends on how you use it.

The problem is that the models have already processed far more data than any human being ever has. Emergent properties are not necessarily a sign of intelligence; they can simply be a sign of good intuition. And that is precisely what this technology simulates: intuition and learning.

While learning and intuition are part of what we consider general intelligence, a far more fundamental component is lacking: Individuated consciousness.



40 minutes ago, Scholar said:

The problem is that the models have already processed far more data than any human being ever has. Emergent properties are not necessarily a sign of intelligence; they can simply be a sign of good intuition. And that is precisely what this technology simulates: intuition and learning.

While learning and intuition are part of what we consider general intelligence, a far more fundamental component is lacking: Individuated consciousness.

AI might have processed far more data than the average human on a specific topic, but data alone is not what makes the emergent properties appear; it is the number of parameters a model has. Parameters are like the connections/synapses in a brain, and current models have roughly 500 times fewer of them than the human brain has synapses. In my opinion, judging by my experience with LLMs and other neural networks, it seems clear that as the number of parameters increases, the models become more "intelligent" and the degree of generalization they are capable of increases. This opinion could be wrong, since we don't truly know how a neural network of this scale (or even our brain) works yet. I would define generalization ability as the ability to make connections and recognize patterns: the more connections and patterns you can recognize, the more general and intelligent you are. That is what we are seeing happen with LLMs.
I think consciousness could be a by-product of this emergent process, once the number of connections reaches a certain level of complexity and interconnectedness. We don't know for sure, of course. But I think LLMs are the early starting point of qualia, which, if achieved, would truly bring general intelligence.

1 hour ago, erik8lrl said:

it can still be good and useful for many different applications. It all depends on how you use it. 

Yeah, I agree that it can be useful for multiple different things, but I tried to show some of the things that I and others have recognized regarding its semantic understanding and reasoning capability.

1 hour ago, erik8lrl said:

Can you tell me what question you asked? Just curious.

The article I linked shows many examples, but here is another one:

https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523

I recommend checking this one as well:

https://amistrongeryet.substack.com/p/gpt-4-capabilities

 

This is also interesting:

Quote

Here’s a Twitter thread where Eric Hallahan finds that when given questions from the “Codeforces” programming competition, GPT-4 “solved 10/10 pre-2021 problems and 0/10 recent problems”, suggesting that its performance was due to having seen the older problems solved (or at least discussed) somewhere in its training data.

 

 


AGI isn't going to come anytime soon and I don't even see why we would need it.

We could simply use agents to do specific tasks. That's a much better use case for AI.

Also, AI is going to suck at real-world tasks like driving cars or buses, but it's going to excel at digital work. Very interesting.

 

1 hour ago, zurew said:

Yeah, I agree that it can be useful for multiple different things, but I tried to show some of the things that I and others have recognized regarding its semantic understanding and reasoning capability.

The article I linked shows many examples, but here is another one:

https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523

I recommend checking this one as well:

https://amistrongeryet.substack.com/p/gpt-4-capabilities

 

This is also interesting:

 

Very cool! Yeah, it's not perfect; we'll see how it develops.

1 hour ago, Bobby_2021 said:

AGI isn't going to come anytime soon and I don't even see why we would need it.

We could simply use agents to do specific tasks. That's a much better use case for AI.

Also, AI is going to suck at real-world tasks like driving cars or buses, but it's going to excel at digital work. Very interesting.

 

Yes, I think it's more likely to be an AGI that can generalize most human knowledge without being a conscious agent. Such an AI could democratize intelligence/expertise, which would impact society greatly.
Imagine anyone having a life coach AI on the same level of development as Leo lol. Or any other form of expertise.

