waheed

An insight about LLMs.

8 posts in this topic

I usually get insights when I'm not completely awake in the morning, or when I'm not fully asleep at night.

The most recent one is about LLMs. It became clear as day to me that, as long as they rely on human-generated data, LLMs can never surpass the smartest human in intelligence, even if they gather a planet's worth of compute. Intelligence is ingrained in the abstractions and the data humanity has created, including all of our knowledge. A higher intelligence would have very different "content", for lack of a better word.
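For what it's worth, there is a standard way to state this intuition formally. Pre-training a language model by maximum likelihood on human text minimizes cross-entropy against the distribution of that text, which differs from the KL divergence to it only by a constant (this is textbook information theory, not anything specific to a particular model):

```latex
\min_{\theta}\; \mathbb{E}_{x \sim p_{\text{human}}}\bigl[-\log q_{\theta}(x)\bigr]
\;=\; \min_{\theta}\; \mathrm{KL}\bigl(p_{\text{human}} \,\|\, q_{\theta}\bigr) + H\bigl(p_{\text{human}}\bigr)
```

The entropy term H(p_human) does not depend on the model parameters θ, so the global optimum of the objective is simply q_θ = p_human: the pre-training objective itself never rewards anything beyond reproducing the human data distribution. This says nothing about what later fine-tuning stages might add; it only formalizes the claim about training on human-generated data.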

There is another corollary to this, which suggests that LLMs are far more limited than we think.

By the way, I use LLMs every day in my work, and they have helped me a lot.



Yes, they are basically lying about them. How can an LLM find a cure for something if you don't feed it the relevant data? The LLM doesn't think.

20 hours ago, Hojo said:

How can an LLM find a cure for something if you don't feed it the relevant data?

It could be that all the steps toward a cure are already present in an LLM's training data; it just needs enough compute to make the necessary connections.

1 hour ago, cistanche_enjoyer said:

It could be that all the steps toward a cure are already present in an LLM's training data; it just needs enough compute to make the necessary connections.

This is an extended version of the low-hanging-fruit argument, and it's possible that some low-hanging fruit remains.


I would be careful about saying LLMs can't think. They can find solutions to problems by analyzing data and connecting ideas across it. That's not so different from how a human brain works.


Owner of creatives community all around Canada as well as a business & Investing mastermind 

Follow me on Instagram @Kylegfall 

 

13 hours ago, LordFall said:

I would be careful about saying LLMs can't think. They can find solutions to problems by analyzing data and connecting ideas across it. That's not so different from how a human brain works.

What is thinking, anyway? If you have a clear definition, you're not self-conscious.


𝔉𝔞𝔠𝔢𝔱 𝔣𝔯𝔬𝔪 𝔱𝔥𝔢 𝔡𝔯𝔢𝔞𝔪 𝔬𝔣 𝔤𝔬𝔡
Eternal Art - World Creator
https://x.com/VahnAeris

On 1/14/2026 at 0:25 AM, LordFall said:

I would be careful about saying LLMs can't think. They can find solutions to problems by analyzing data and connecting ideas across it. That's not so different from how a human brain works.

LLMs can solve problems and work with language. However, since they rely on language and its inherent structure, they won't go beyond it.

But intelligence is not just language and its inherent structure. Human intelligence is far more complex than language; it is expressed in language when there is a need for it. I'd argue that knowing and understanding are prior to language.

Additionally, any higher intelligence would have a different language, or maybe something totally different, enabled by an intelligence level we can't imagine. Can a bird imagine anything humans have created? I think not, though that's just an assumption :)

Similarly, a higher intelligence would have a different language (for lack of a better word), different concepts, and different relationships among those concepts, enabling it to express or explore reality at a very different level than us. Though I'm using these words, we don't really know what such an intelligence level would enable. As of now, we don't even know why humans evolved language; on an evolutionary scale, it's a recent thing.

Superintelligence is just a myth the tech bros are selling us. There is more to this, but I won't bore you further.

On 1/14/2026 at 1:58 PM, AerisVahnEphelia said:

What is thinking, anyway? If you have a clear definition, you're not self-conscious.

Thinking, understanding, and even intelligence are attributed to LLMs primarily to increase their appeal for financial gain, since the reality of these things wouldn't be so interesting.


Intelligence is emergent. If an AI mapped out all known problem structures and all known solution patterns, I can see how it would be able to solve novel problems once trained on those structures. And I think that's actually what they're working on behind the scenes: they're not just training models on knowledge and data, they're training them on problem and solution structures, which are finite. Can you imagine a machine that could instantly identify the problem and instantly know the path to the solution? That day will come.

And when it does, AI will be more creative than humans, as it will understand the structure of creativity and know what makes creativity good, bad, useful, or useless.


"It is of no avail to fret and fume and chafe at the chains which bind you; you must know why and how you are bound. " - James Allen 

