LordFall

The AI crash is impossible - Change my View

181 posts in this topic

I like Bashar's thinking here, but I'd push back in a couple of ways.

1) Yes, true intelligence operates on whole systems thinking. This is correct. But it would be a mistake to assume whole systems thinking = totally harmless. A true intelligence could still make decisions that cause tradeoffs within the system.

2) He seems to think that AI will be more intelligent than humans, and that we will have to catch up to its level. I suspect it's the other way around: humanity will first align with greater intelligence, and only then will we create it. Greater intelligence coming first feels backwards.

3) We are not close to building the kind of AI he is describing.


"Finding your reason can be so deceiving, a subliminal place. 

I will not break, 'cause I've been riding the curves of these infinity words and so I'll be on my way. I will not stay.

 And it goes On and On, On and On"


@aurum Good points.

One additional point: there will be intermediate stages of intelligence before we reach that holistic, benevolent, ultimate intelligence. That intermediate intelligence could be unholistic and corrupt, especially since it will be made by corrupt, greedy corporate hacks.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


One great form of intelligence that even today's AI systems have is the ability to explain their thought process. You can give one the task of analyzing a complex geopolitical situation, ask it to explain how it reached its conclusion, and it will make more sense than most humans today, including our leading experts in a whole lot of fields.

According to Merriam-Webster, intelligence is defined as follows:

1) a: the ability to learn or understand things or to deal with new or difficult situations 

b: mental acuteness : cleverness

c: comprehension, knowledge

d: Christian Science : the basic eternal quality of divine Mind

I would say that already today it does better than the average human on a, b, and c, and arguably also on d.

What precise definition of intelligence are you guys using to claim it falls short of humans even today, let alone five years from now?



Owner of creatives community all around Canada as well as a business & Investing mastermind 

Follow me on Instagram @Kylegfall 

 

15 minutes ago, Leo Gura said:

@aurum Good points.

One additional point: there will be intermediate stages of intelligence before we reach that holistic, benevolent, ultimate intelligence. That intermediate intelligence could be unholistic and corrupt, especially since it will be made by corrupt, greedy corporate hacks.

That's another thing I feel sometimes gets missed in this AGI discussion.

People conflate AGI with human-like or even god-like intelligence. But you could make an AGI that is still relatively dumb compared to humans. That would probably be the starting point.

So even if the tech bros create AGI in the next couple of years, as they're betting on, they're also betting that this AGI will be of at least human-level intelligence. That's an additional bet.



On 4-3-2026 at 10:10 PM, Leo Gura said:

I am turning into old-man-yells-at-cloud.

I told you to make an AI episode! 

51 minutes ago, Butters said:

I told you to make an AI episode! 

What if the episode was made by AI itself?

Edited by UnbornTao

1 hour ago, UnbornTao said:

What if the episode was made by AI itself?

What if all recent blog posts were just chatgpt 😳

3 minutes ago, Butters said:

What if all recent blog posts were just chatgpt 😳

🤖

On 2026. 03. 06. at 9:47 PM, aurum said:

A true intelligence could still make decisions that cause tradeoffs within the system.

I don't know what your notion of "true intelligence" is, but regardless of that notion, if we create a different one, like "the ability to solve problems," then I don't see the argument for the implicit claim that AI can't become a better problem solver than humans while also having an evil or sociopathic character.


I also have issues with the described holistic-thinking idea, where self-preservation somehow needs to be implicit.

I don't see why that would be the case.

Why couldn't there be an AI with holistic thinking that doesn't care about self-preservation at all? Or one that cares about self-preservation only to some degree but cares about other things more (things that could lead the AGI to self-delete, destroying a bunch of other things along with it)? Or one that only wants to self-preserve for some finite amount of time?

 

But even if we assume that holistic thinking necessarily includes the value of self-preservation, there are still plenty of nuanced scenarios in which you can destroy a lot of things without destroying yourself.

Edited by zurew

12 minutes ago, zurew said:

also having an evil character or a sociopathic character.

It won't have that character unless it is programmed to have it. The AI doesn't want anything.



Just now, Leo Gura said:

The AI doesn't want anything.

Not yet. It might be the case that LLMs will never have a character, but I don't see why we should assume that a non-LLM-based AGI will never have one, or that it could never overwrite its built-in character.

21 minutes ago, zurew said:

But even if we assume that holistic thinking necessarily includes the value of self-preservation, there are still plenty of nuanced scenarios in which you can destroy a lot of things without destroying yourself.

And as an additional point: you can care about the whole while not caring just as much about the parts.

There can be scenarios where replacing or destroying parts benefits the greater whole (an easy example: destroying cancer to preserve your body).

Edited by zurew

39 minutes ago, zurew said:

if we create a different notion like "the ability to solve problems"

Intelligence should not be reduced to problem-solving ability.

If we create a reductionistic definition for intelligence, of course our conclusions about a super-intelligent AI will be wrong.



7 minutes ago, aurum said:

Intelligence should not be reduced to problem-solving ability.

That's not the point.

You are bringing in normativity, but that's not relevant in this specific case.

We can label "problem solving" any way we like; we don't need to put the label "intelligence" on it if we don't want to.

But we can still engage with that concept descriptively, make inferences about it, and think it through.

 

The issue is that if better problem solving can be achieved without the AI developing any kind of good moral character or care for whole systems, then we have a problem (assuming the developers of AI only care about making better problem solvers).

Edited by zurew

40 minutes ago, zurew said:

I don't see why we would assume that a non-LLM-based AGI won't ever have a character

As soon as you switch to AGI you are talking about a different beast.

Edited by Leo Gura


3 hours ago, zurew said:

The issue is that if better problem solving can be achieved without the AI developing any kind of good moral character or care for whole systems

Maybe.

But then I'd say it shouldn't be considered intelligence. 



4 hours ago, aurum said:

Intelligence should not be reduced to problem-solving ability.

If we create a reductionistic definition for intelligence, of course our conclusions about a super-intelligent AI will be wrong.

So what is your definition of intelligence? 



1 hour ago, LordFall said:

So what is your definition of intelligence? 

Intelligence, in the way human beings refer to it, for the most part actually boils down to awareness.

Awareness is necessarily qualia: you cannot be aware of redness without the experience of redness, because awareness is a form of existence.

The existence, the being, of redness is a prerequisite of the awareness of it.

 

What you see in AI right now is specifically non-intelligence, in the sense people usually (unknowingly) mean. It consists of mental processes that are unaware. AI is purely intuitive, which just means it is unconscious. AI does what your brain does when it creates a dreamscape.

You can even think of thinking as an intuitive, non-conscious process. Many people assume intelligence is rooted in thinking, but thinking is mostly the result of unaware, unintelligent "processing". What we mean by intelligence is a combination of thinking and the awareness applied to it. Awareness is what then steers and informs the subconscious processes of the brain as it reflects, recognizes, and provides feedback.

 

You basically have several neural networks in your brain, all connected to each other. They provide functional intuition and so forth. Many people think that is all there is to intelligence, because these neural networks are what provide "functionality" in terms of problem solving.

However, what makes us "intelligent" in a truer sense is the fact that all of these neural networks feed into a unified field of perception. There is activity beyond mere neural activity: colors, sounds, objecthood, relations, concepts, etc.

Basically, what you call consciousness. And that consciousness, in the narrow sense of the word, is shaped in part by the neural activity in your brain. However, this consciousness provides new, fundamentally inaccessible functionality (as it is not function, but another form of being) that then feeds back into the neural networks of the brain.

 

So, when you look at neural networks and what they can do, it is all the things the brain can do without consciousness, without awareness. To the LLM, there is no essence to any of the symbols it creates when constructing sentences that look like human speech. What the AI creates, in terms of imitating human speech, is only meaningful insofar as it is fed into a consciousness.

 

What exactly a brain or neural network can do without consciousness/awareness is hard to determine. I wager there is a lot of functionality that can be created purely through a sort of intuitive, unconscious processing.

Yet it also seems to be the case that people generally underestimate the significant impact awareness/consciousness has.

 

There is an important reason why you might not actually be able to engineer your way to consciousness with the technology we currently have.

The way consciousness is individuated in this universe is, if we adopt a dualistic framework, particular to the physical arrangement of, and relationships between, atoms/wavefunctions. Evolution occurs in physical reality; it explores various physical arrangements that then relate to other forms of existence (what you refer to as consciousness). Meaning: given the profound functionality awareness/consciousness provides to an organism, physical structures, through random mutation and natural selection, will arrange themselves so as to give rise to individuated consciousness.

Computer simulations are fundamentally limited in that the physical process of computation remains the same. When neural networks are evolved in simulation, they only acquire emergent, purely "neural network" functionality. Consciousness in this sense will never evolve in these networks, because that would require the substrate, the hardware on which the neural networks run, to arrange itself in a way that relates to consciousness the same way, or a similar way, that the brain does.

In other words, if you want to create true "AGI", an AGI that is aware, you would need to go beyond simulated evolution and participate in a physical form of controlled evolution, or specifically determine how the physical arrangements in the brain relate to consciousness and then replicate that in a controlled physical medium.

If you could create a perfect physical simulation (mathematically and physically speaking) of the brain in a contemporary computer system, my claim is that the simulated brain would not behave the same way the real-world brain does. In fact, the brain would be non-functional, as its neural arrangements rely on, and are adapted to, the field of perception that arises as a result of the physical arrangements themselves, in a feedback loop. You would basically have only one half of that feedback loop, even if you simulated every single particle in the human brain.

 

There are two assumptions made by AGI optimists that are not questioned deeply enough:

1. Consciousness itself is not integral to what we consider complete intelligence and provides no unique functionality that cannot emerge from functional complexity.

2. Consciousness can emerge from pure functional complexity.

 

Edited by Scholar

