thenondualtankie

AGI is coming (megathread)


There seem to be indicators that we're going to get Artificial General Intelligence (by any definition you want) in the coming few years. This means AI that is as capable as the best humans (or better) in any intellectual domain: for example, AI that is as good as the best software engineer.

Here is a really great systems thinker whose focus is on AI and AGI:

 


I just want to point out that "just as good or better, in any intellectual domain" is a very broad statement, and it can even be a misleading one.

When we speak of AI as being capable, we typically lump together intellectual capacity (knowledge) with intellectual freedom. AI is a pattern machine that is limited to whatever has been discovered and acknowledged as important before its calculations even begin.

Since the AI lacks its own sentience, it has no will of its own. The results it comes up with will merely be a wide range of variations on what already exists. AI is not capable of original ideas without major human intervention, so we will not see any new ideas like those Einstein brought us coming from an AI, even though in theory it should be more than capable, with its massive computing power.

An AI can, at best, replicate knowledge in order to get from point A to B and so on, but there is no original source of understanding that any knowledge of an AI rests in. Its main source of "understanding" comes from specific targeted goals.

Some may think this is a negative assessment of AI, but my main point is not about hating on AI. My main point is to address common assumptions about AI that will stubbornly cling as truths until they are looked at from a more in-depth philosophical point of view. A lot of people in the tech space view their own thinking as that of a robot/machine, and while they may be the leading experts in their field, something to keep in mind is that they may not be leading philosophers at the end of the day.

So while AI may bring in a lot of new results, it may not necessarily bring about a lot of new understanding.


I've had similar thoughts too (AI will never be able to create something actually *new*), but we simply do not know.

GPT-4 currently has some small ability to create novel things, for example poems.

There's a term in the AI space called 'grokking', where a model that seemed to have merely memorized its training data suddenly starts generalizing, long after training appeared to have plateaued. There's a related idea of 'emergent abilities': for example, GPT-style models learned how to translate between languages without being explicitly trained to do so, with the ability appearing fairly abruptly once enough training had been done.
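
If you want to see this for yourself, the original grokking paper (Power et al., 2022) used tiny modular-arithmetic tasks. Here's a minimal sketch of that kind of experiment, assuming PyTorch; the network size, hyperparameters, and data split are my own illustrative choices, not the paper's exact setup. The signature of grokking is that validation accuracy sits near chance long after the training set is fully memorized, then jumps abruptly:

```python
import torch
import torch.nn as nn

P = 97  # toy task: predict (a + b) mod P

# Enumerate every (a, b) pair and hold out half as a validation set.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, val_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

model = nn.Sequential(
    nn.Embedding(P, 64),   # shared embedding for both operands
    nn.Flatten(),          # (batch, 2, 64) -> (batch, 128)
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),
)
# Strong weight decay is thought to matter for grokking to show up.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(50_000):
    batch = train_idx[torch.randint(len(train_idx), (512,))]
    loss = loss_fn(model(pairs[batch]), labels[batch])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        with torch.no_grad():
            val_acc = (model(pairs[val_idx]).argmax(-1) == labels[val_idx]).float().mean()
        print(f"step {step}: train loss {loss.item():.3f}, val acc {val_acc:.2f}")
```

Whether a given run actually groks depends on the hyperparameters, but the train/validation gap is the thing to watch.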


If anyone is interested enough in this topic for 5-10 mins of reading, a section of the philosophy book I'm writing deals with some of the difficulties in creating an AGI.

The tl;dr version is that artificial intelligence doesn't actually understand anything, and a capacity for understanding isn't something that can just be programmed into a computer. Rather, understanding is an embodied process that relies on Reality having consequences for the being in question. Unlike living beings, AIs have no 'skin in the game' as far as their interactions with Reality go. Living beings and digital computers are organized around very different axiomatic principles, so it's an open question whether or not a capacity for understanding can be replicated in a disembodied AI.

______________________________

 

What Artificial Intelligence Can Teach Us About Minds

As of the time of this book’s writing in 2023, machine learning algorithms such as ChatGPT have advanced to the point where their responses to questions can correspond to an impressive degree with how human beings use written language. ChatGPT’s ability to incorporate context in conversationally appropriate ways makes interacting with these models feel uncannily natural at times. Of course, training an AI language model to interact with humans in ways that feel natural is far from an easy problem to solve, so all due credit to AI researchers for their accomplishments. Yet in spite of all this, it’s also accurate to point out that artificial intelligence programs don't actually understand anything. This is because understanding involves far more than just responding to input in situationally appropriate ways. Rather, understanding is grounded in fundamental capacities that machine learning algorithms lack. Foremost among these is a form of concernful absorption within a world of lasting consequences; i.e., a capacity for Care. To establish why understanding is coupled to Care, it will be helpful to explore what it means to understand something.

To understand something means to engage in a process of acquiring, integrating, and embodying information. Breaking down each of these steps in a bit more detail: (1) Acquisition is the act of taking in or generating new information. (2) Integration involves synthesizing, or differentiating and linking, this new information with what one already knows. (3) Embodiment refers to how this information gets embedded into our existing organizational structure, informing the ways in which we think and behave. What’s important to note about this process is that it ends up changing us in some way. Moreover, the steps in this sequence are fundamentally relational, stemming from our interactions with the world.

While machine intelligence can be quite adept at the first stage of this sequence, owing to the fact that digital computers can accumulate, store, and access information far more efficiently than a human being, it’s in the latter steps that they fall flat in comparison to living minds. This is because integration and embodiment are forms of growth that stem from how minds are interconnected to living bodies. In contrast, existing forms of machine intelligence are fundamentally disembodied, owing to the fact that digital computers are organized around wholly different operating principles than those of living organisms.

For minds that grow out of living systems, interconnections between a body and a mind, and between a body-mind and an environment, are what allow interactions with Reality to be consequential for us. This is an outcome of the fact that our mind’s existence is sustained by the ongoing maintenance of our living bodies, and vice versa. If our living bodies fail, our minds fail. Likewise, if our minds fail, our bodies will soon follow, unless artificially kept alive through external mechanisms.

Another hallmark of living systems is that they’re capable of producing and maintaining their own parts; in fact, your body replaces about one percent of its cellular components on a daily basis. This is evident in the way that a cut on your finger will heal, and within a few days effectively erase any evidence of its existence. One term for this ability of biological systems to produce and maintain their own parts is autopoiesis (a combination of the ancient Greek words for ‘self’ and ‘creation’). 
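
To put a rough number on that turnover, here is a back-of-the-envelope sketch that assumes a flat one percent daily rate (a simplification, since real turnover varies enormously by tissue type):

```python
import math

daily_turnover = 0.01  # assumed flat rate: 1% of cells replaced per day
remaining = lambda days: (1 - daily_turnover) ** days

print(f"after 30 days: {remaining(30):.0%} of the original cells remain")  # ~74%
# days until roughly half the original cells have been replaced:
print(f"~50% replaced after {math.log(0.5) / math.log(1 - daily_turnover):.0f} days")  # ~69
```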

The basic principles behind autopoiesis don't just hold true for your skin, but for your brain as well. While the neurons that make up your brain aren’t renewed in the same way that skin or bone cells are, the brain itself has a remarkable degree of plasticity. What plasticity refers to is our brain’s ability to adaptively alter its structure and functioning. And the way that our brains manage to do this is through changes in the connections between neurons (junctions known as ‘synapses’). How we end up using our mind has a direct (though not straightforward) influence on the strength of synaptic connections between different regions of our brain; which in turn influences how our mind develops. Accordingly, this is also the reason why the science fiction idea of ‘uploading’ a person’s mind to a computer is pure fantasy, because how a mind functions is inextricably bound with the network of interconnections in which that mind is embodied.

This fundamental circularity between our autopoietic living body and our mind is the foundation of embodied intelligence, which is what allows us to engage with the world through Care. Precisely because autopoietic circularity is so tightly bound with feedback mechanisms that are inherent to Life, it’s proven extraordinarily challenging to create analogues for this process in non-living entities. As such, it’s yet to be demonstrated whether or not autopoietic circularity can be replicated, even in principle, through the system of deterministic rules that governs digital computers. Furthermore, giving machine learning models access to a robotic ‘body’ isn’t enough, on its own, to make these entities truly embodied. This is because embodiment involves far more than having access to and control of a body. Rather, embodiment is a way of encapsulating the rich tapestry of interconnections between an intelligence and the physical processes that grant it access to a world (keeping in mind that everything that your body does, from metabolism to sensory perception, is a type of process).

For the sake of argument, however, let’s assume that the challenges involved in the creation of embodied artificial intelligence are ultimately surmountable. Because embodiment is coupled to a capacity for Care, the creation of embodied artificial intelligence has the potential to open a Pandora’s box of difficult ethical questions that we may not be prepared for (and this is in addition to the disruptive effects that AI is already having on our society). Precisely because Care is grounded in interactions having very real consequences for a being, by extension this also brings with it a possibility for suffering.

For human beings, having adequate access to food, safety, companionship, and opportunities to self-actualize aren’t abstractions, nor are they something that we relate to in a disengaged way. Rather, as beings with a capacity for Care, when we’re deprived of what we need from Reality, we end up suffering in real ways. Assuming that the creation of non-living entities with a capacity for Care is even possible, it would behoove us to tread extraordinarily carefully, since this could result in beings with a capacity to suffer in ways that we might not be able to fully understand or imagine (since it’s likely that their needs may end up being considerably different than those of a living being).

And of course, there’s the undeniable fact that humanity, as a whole, has had a rather poor track record when it comes to how we respond to those that we don’t understand. For some perspective, it’s only relatively recently that the idea of universal human rights achieved some modicum of acceptance in our emerging global society, and our world still has a long way to go towards the actualization of these professed ideals. By extension, our world’s circle of concern hasn’t expanded to include the suffering of animals in factory farms, let alone to non-living entities that have the potential to be far more alien to us than cows or chickens. Of course, that’s not to imply that ‘humanity’ is a monolith that will respond to AI in just one way. Rather, the ways that beings of this type will be treated will almost certainly be as diverse as the multitude of ways that people treat one another. 

 

Of course, all of this is assuming that the obstacles on the road to embodied artificial intelligence are surmountable, which is far from a given. It could very well be that the creation of non-living entities with a capacity for understanding is beyond what the axioms of digital computation allow for, and that apparent progress towards machine understanding is analogous to thinking that one has made tangible progress towards reaching the moon because one has managed to climb halfway up a very tall tree. Yet given the enormity of the stakes involved, it’s a possibility that’s worth taking seriously. For what it’s worth, we’ll be in a much better position to chart a wise course through the challenges that lie ahead if we approach them with a higher degree of self-understanding.


I'm writing a philosophy book! Check it out at : https://7provtruths.org/

6 hours ago, DocWatts said:

Rather, understanding is an embodied process that relies on Reality having consequences for the being in question.

Things can be understood in pure abstraction. You can understand 1+1=2 without any consequences.



On 12/11/2023 at 6:39 AM, ZzzleepingBear said:

I just want to point out that "just as good or better, in any intellectual domain" is a very broad statement, and it can even be a misleading one.

[...]

So while AI may bring in a lot of new results, it may not necessarily bring about a lot of new understanding.

Can it really be considered intelligent at that point? So far AI seems to be mostly just a tool for aggregating data more efficiently.

Do you guys think we are in a Gartner hype cycle, specifically towards the "peak of inflated expectations"? So far AI seems to be an effective tool for certain tasks but nothing completely revolutionary. Granted, it is still developing.

[Attached image: Gartner hype cycle chart]


3 hours ago, Leo Gura said:

Things can be understood in pure abstraction. You can understand 1+1=2 without any consequences.

I'd counter that things can only be 'mostly' abstract for us.

I'm doubtful as to whether 'pure' abstraction can actually exist, because explicit knowledge of anything (including abstract concepts such as arithmetic) is grounded in a mountain of tacit knowledge that we pick up from having consequential interactions with the world around us.

The only reason that 1+1=2 is meaningful for us is because we're able to manipulate things in Reality, which are sometimes encountered alone and which are sometimes encountered in pairs. 


On 11/12/2023 at 8:20 PM, thenondualtankie said:

I've had similar thoughts too (AI will never be able to create something actually *new*)

Well, what we call *new* is just permutations and combinations of the old. GPT-4 can create poems because it has access to a large amount of text-based information from the internet. Similarly, for AI to create something new in any scientific sense, it would have to have access to physical information, which is at least not possible now.
One other thing GPT is good at is producing code, and it can produce new code from boilerplate and bits and pieces of code already on the internet. So I think AI can discover new things; it's just that it will take time. It could be in a few years, but it could also be on a scale of decades.
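
As a concrete illustration of the code-generation point, here's a hypothetical sketch using the official openai Python client (v1+); the model name and prompt are placeholders, and it assumes an API key in the OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to assemble boilerplate it has effectively seen
# countless variations of in its training data.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write a Python function that parses a CSV file "
                   "and returns its rows as a list of dictionaries.",
    }],
)
print(response.choices[0].message.content)
```

The point isn't that this is deep; it's that recombining well-trodden patterns like this is exactly what these models are best at.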


The reality is that what we call "AI/LLM" is just a tool built around a massive spreadsheet and graph, not an "entity." It's closer to a parlor trick that mimics human knowledge, despite how convincing it is. That said, it's a fucking amazing tool, and parlor trick or not, it's revolutionizing how we interface with computers. No-code programming is a vast improvement on traditional coding, and nobody in their right mind would spend years learning BS jargon if it weren't necessary to achieve the goal of programming, which is taking thoughts and translating them into the digital world.

When you compare compute per watt, the human brain runs on about 10 watts of power, compared to the hundreds of thousands of watts required to power these mega-models. At the moment we are a long way from anything close to human-level intelligence (generally speaking; less so on specialized tasks).
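
To make that comparison concrete (the cluster figure below is an illustrative assumption, not a measurement):

```python
brain_watts = 10         # the rough brain figure cited above
cluster_watts = 300_000  # assumed draw for a large GPU training cluster
print(f"~{cluster_watts / brain_watts:,.0f}x the brain's power budget")  # ~30,000x
```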

Ray Kurzweil predicted 2029 for what we call "AGI", and that seems about right. I don't think AGI will be anything that different from the current iteration of ChatGPT in principle, but its capabilities are going to vastly dwarf humans'. The thing I find most interesting about what makes computers "smart" is their ability to exist outside of spacetime.

They can gain experience non-linearly, accumulating the equivalent of thousands of years of experience in parallel (albeit through "brute force"). Using massively expensive and power-hungry GPU clusters, they are not bound to our spacetime-limited "one minute at a time" existence. So even if computers have the intelligence of a toddler, they have the added benefit of being able to gain infinite experience if you have enough $ to throw at it.

It will be a supertool, and the real danger is which asshole is going to use that supertool to try and destroy humanity...

 

