LordFall

The AI crash is impossible - Change my View

198 posts in this topic

4 hours ago, LordFall said:

So what is your definition of intelligence? 

To perceive and act in alignment with truth.


"Finding your reason can be so deceiving, a subliminal place. 

I will not break, 'cause I've been riding the curves of these infinity words and so I'll be on my way. I will not stay.

 And it goes On and On, On and On"

3 hours ago, aurum said:

To perceive and act in alignment with truth.

Given that notion, is the motivation for the claim that higher-level intelligence includes better character and more care something like this:

To be intelligent is to care about and to recognize fundamental truths about reality. It is metaphysically true that reality is fundamentally love and that everything is ultimately one, and being aligned with that truth means recognizing that fact and living in alignment with it.

So it is basically a completely different way or mode of being, where you process and filter information differently, and if you are not in that mode of being, you don't have access to, and can't recognize, certain truths.

Edited by zurew

On 3/5/2026 at 8:37 PM, Lyubov said:

All those layoffs are speculative, and they're happening because we are in a quiet recession on par with 2008, not because AI has actually replaced anyone.

AI hype is very convenient for hype-washing real economic downturns. AI investment itself is being used as a hedge against a recession and is single-handedly propping up the US economy in pure investment dollars. That's probably why Trump wants to bail out AI companies when they inevitably crash: to avoid associating his name with bad times.

10 hours ago, aurum said:

To perceive and act in alignment with truth.

Then all current AI models fit your definition. You give one a true statement or a base of information, and it can perceive that information and act in alignment with it.

@Scholar Yes, those are good points to bring up. I think it's really important to separate consciousness from intelligence, because that distinction lies at the crux of the issue and is why this conversation is more interesting to have on this forum than elsewhere.

An ant or a bee is conscious, but they are not considered intelligent. Sentience is another word to throw into the mix to make consciousness more nuanced: with the work we do here, we aim to increase consciousness and thus separate ourselves from other sentient human beings who exhibit low consciousness.

I think LLMs are already capable of more intelligent frameworks and trains of thought than any human alive today. They are not sentient, though, and thus need to be infused with our own human consciousness to come up with these intelligent frameworks. At this point I would consider them a hybrid life form.

So we need to frame this AGI debate in terms of sentience, because as Joshe and I pointed out, AI doesn't need to cross that threshold to completely revolutionize our civilization. It just needs widespread commercial application, which is inevitable in my view.

And as for the view some of you seem to take that machines fundamentally cannot develop sentience, I would call that a foolish take, especially since sentience is barely understood. If we believe in non-duality, sentience doesn't even come from organic matter; it comes from consciousness itself. So why wouldn't consciousness take a limited organic life form that easily dies and have it combine the other forms of reality into a more resilient life form based on computing? Is that really such a hard paradigm to believe in? It seems logical, if not inevitable, to me.

 



Owner of creatives community all around Canada as well as a business & Investing mastermind 

Follow me on Instagram @Kylegfall 

 

1 hour ago, LordFall said:

Then all current AI models fit your definition.

No, not at all.

The LLM is, at its core, just a prediction algorithm based on its training data. It cannot perceive, self-reflect, reason, generalize principles, generate deeply novel insight, update itself, or act informally. It does not even know it exists.

This is why the jagged-intelligence phenomenon happens. LLMs are just brute-force compute. They are not a true intelligence.
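The "prediction algorithm" framing can be sketched concretely. Below is a toy bigram model, an illustration only: real LLMs use learned neural networks over vast corpora, but the core step, scoring candidate next tokens from training data and emitting the most likely one, is the same. The corpus and token names here are made up for the example.

```python
# Toy next-token prediction: a bigram "language model" built from raw
# counts. Real LLMs learn probabilities with neural networks, but the
# loop is the same: score candidate next tokens, then pick the best.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most likely next token seen in training, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (seen twice vs "dog" once)
```

The model can only reproduce patterns present in its training counts; ask it about a token it has never seen and it has nothing to say, which is the sense in which it "predicts" rather than understands.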


8 hours ago, zurew said:

To be intelligent is to care about and to recognize fundamental truths about reality. It is metaphysically true that reality is fundamentally love and that everything is ultimately one, and being aligned with that truth means recognizing that fact and living in alignment with it.

Yes, that would be a good start. Although labeling intelligence as a binary, i.e. true intelligence or not true intelligence, can be a bit misleading.

Really what you have is a spectrum of intelligence.

Saying "true intelligence" is mostly shorthand for a relatively high intelligence that is more conscious of its unity.

Lower levels of intelligence still exist, but they are less conscious of their unity.


38 minutes ago, aurum said:

The LLM is at its core just a prediction algorithm based on its training data. It cannot perceive, self-reflect, reason, generalize principles, generate deeply novel insight, update itself or act informally. It does not even know it exists.

I think DeepMind's systems go further than text-based prediction by combining neural networks with reinforcement learning and planning algorithms, which lets them evaluate actions and discover new strategies.

None of these systems are conscious, but as far as I understand they are more than simple prediction engines. And I don't just mean chatbots or agents.

8 hours ago, bazera said:

I think DeepMind's systems go further than text-based prediction by combining neural networks with reinforcement learning and planning algorithms, which lets them evaluate actions and discover new strategies.

None of these systems are conscious, but as far as I understand they are more than simple prediction engines. And I don't just mean chatbots or agents.

DeepMind's systems are somewhat different from strict LLMs, but they are essentially still prediction engines. Reinforcement learning is based on reward functions that allow the system to predict the probability of the best move.
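The reward-function idea can be sketched with tabular Q-learning, a minimal illustration only: the two-state toy problem, state names, and hyperparameters below are invented for the example, and DeepMind's systems use neural networks and far more sophisticated planning. The agent updates a value estimate for each state-action pair from rewards, then "predicts the best move" by picking the action with the highest learned value.

```python
# Minimal tabular Q-learning: values for (state, action) pairs are
# updated from rewards, and the "best move" is whichever action has
# the highest learned value. Deep RL swaps the table for a network.
import random

random.seed(0)  # deterministic for the example

def q_learning(transitions, actions, episodes=500, alpha=0.5, gamma=0.9):
    """transitions: (state, action) -> (next_state, reward); next_state None = terminal."""
    q = {}
    states = sorted({s for s, _ in transitions})
    for _ in range(episodes):
        state = random.choice(states)
        while state is not None:
            action = random.choice(actions)
            if (state, action) not in transitions:
                break  # no defined transition from here
            nxt, reward = transitions[(state, action)]
            best_next = max((q.get((nxt, a), 0.0) for a in actions), default=0.0) if nxt else 0.0
            old = q.get((state, action), 0.0)
            # Standard Q-learning update: move toward reward + discounted future value.
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

# Toy problem: from "start", "right" leads to a reward, "left" doesn't.
transitions = {
    ("start", "right"): ("goal", 1.0),
    ("start", "left"): (None, 0.0),
}
q = q_learning(transitions, ["left", "right"])
best = max(["left", "right"], key=lambda a: q.get(("start", a), 0.0))
print(best)  # "right"
```

Nothing here perceives or understands the task; the system just converges on whichever action statistically maximizes reward, which is the sense in which it remains a prediction engine.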


32 minutes ago, aurum said:

DeepMind's systems are somewhat different from strict LLMs, but they are essentially still prediction engines. Reinforcement learning is based on reward functions that allow the system to predict the probability of the best move.

That reminds me of the limbic system and its dopaminergic pathways, which sit at the base of most living organisms. Life managed to go from single cells to multicellular organisms to simple creatures to us. I don't see why AI can't similarly evolve in consciousness.


35 minutes ago, LordFall said:

I don't see why AI can't similarly evolve in consciousness. 

The argument has not been that it "can't" eventually evolve to AGI.

The argument has been that tech CEOs are selling a fantasy on how soon it's going to happen and what the ramifications are for society.

They are selling this fantasy because of some combination of genuinely not understanding how intelligence works + financial incentive + Silicon Valley groupthink.

I personally think we will eventually get AGI. 


9 hours ago, LordFall said:

That reminds me of the limbic system and its dopaminergic pathways, which sit at the base of most living organisms. Life managed to go from single cells to multicellular organisms to simple creatures to us. I don't see why AI can't similarly evolve in consciousness.

A brain is not just a neural net. Right now they are training on one type of architecture that can't evolve. The architecture itself has to evolve along with the training, and they haven't figured out a way to do that.


How is this post just me acting out my ego in the usual ways? Is this post just me venting and justifying my selfishness? Are the things you are posting in alignment with principles of higher consciousness and higher stages of ego development? Are you acting in a mature or immature way? Are you being selfish or selfless in your communication? Are you acting like a monkey or like a God-like being?

4 hours ago, bazera said:

"I was a 10x engineer. Now I'm useless" => https://x.com/atmoio/status/2030289138126107074?s=46&t=efdKEjqLRXtLo5-P5HnZFQ

I relate to this.

@Leo Gura 10x engineer that only uses AI now... 


On 09/03/2026 at 4:53 PM, LordFall said:

Then all current AI models fit your definition. You give one a true statement or a base of information, and it can perceive that information and act in alignment with it.

@Scholar Yes, those are good points to bring up. I think it's really important to separate consciousness from intelligence, because that distinction lies at the crux of the issue and is why this conversation is more interesting to have on this forum than elsewhere.

An ant or a bee is conscious, but they are not considered intelligent. Sentience is another word to throw into the mix to make consciousness more nuanced: with the work we do here, we aim to increase consciousness and thus separate ourselves from other sentient human beings who exhibit low consciousness.

I think LLMs are already capable of more intelligent frameworks and trains of thought than any human alive today. They are not sentient, though, and thus need to be infused with our own human consciousness to come up with these intelligent frameworks. At this point I would consider them a hybrid life form.

So we need to frame this AGI debate in terms of sentience, because as Joshe and I pointed out, AI doesn't need to cross that threshold to completely revolutionize our civilization. It just needs widespread commercial application, which is inevitable in my view.

And as for the view some of you seem to take that machines fundamentally cannot develop sentience, I would call that a foolish take, especially since sentience is barely understood. If we believe in non-duality, sentience doesn't even come from organic matter; it comes from consciousness itself. So why wouldn't consciousness take a limited organic life form that easily dies and have it combine the other forms of reality into a more resilient life form based on computing? Is that really such a hard paradigm to believe in? It seems logical, if not inevitable, to me.

 


LLMs are specifically good at producing probabilistic patterns. If there were an average-IQ human being on this planet who had internalized the amount of information and the patterns that LLMs have, he would likely revolutionize the world within days, creating technologies and scientific insights that would transform everything.

You have to ask yourself why LLMs are not capable of doing this, and why human beings using LLMs are also unable to do this to the degree that a human with access to such a vast amount of knowledge and pattern recognition could.

 

You are just assuming that consciousness is not relevant to intelligence, because LLMs generate text that looks like human thought as a result of having been trained to imitate such symbolic arrangements.

Again, if there were a single human being as eloquent, as vastly informed, and as well trained as LLMs are, you could ask him to solve a vast array of unknown scientific problems and he would have little trouble doing so, even if he were not Einstein-level intelligent. Yet LLMs fail at basic reasoning despite all that training: just debate one on any issue you are an expert in that is not well explored in mainstream literature, and it will be obvious to you.

Why is this the case, if these systems are so intelligent?

5 hours ago, integral said:

@Leo Gura 10x engineer that only uses AI now... 

10x engineer.

Hold my 10x dick while I laugh.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


That 10x thing is clickbait, but it is a fact that the field of computer programming is changing rapidly enough to affect developers' daily jobs, so much so that it causes existential questioning and a crisis of meaning, and it will continue to do so going forward.

No other field is being changed this much by LLMs and agents. It was the easiest to affect because of the nature of the job and its relation to what an LLM does.

I'm not saying this will cause insane productivity gains or replace anyone. I'm just trying to see what's actually happening.


Show me one piece of serious software made by AI.

I am calling all these tech-bro bluffs. Show me receipts or pipe down about AI.

Edited by Leo Gura


@Leo Gura It's not that AI writes software for you; it's that you use AI to help you build software more rapidly than you could solo. And that causes distress in many developers, because before agentic coding assistants you felt that you owned what you wrote: it was all generated by you, through struggle and hard-earned experience.

Now, even if 30% (it's much more than that) of the code is assisted by an LLM, you end up feeling emotionally detached from the work you do, and it's different in nature from the way you worked before.

Maybe it just needs getting used to.

There isn't any serious software made entirely by AI, but nearly all software today is being made with AI assistance.

Even if you just ask ChatGPT to help you fix an error, that's AI assistance; it doesn't mean the AI made the software.

Edited by bazera

