Leo Gura

Leo's Blog Discussion Mega-Thread

6,436 posts in this topic

48 minutes ago, Nemra said:

@Leo Gura, I forgot to mention that the forum's posting section isn't mobile-friendly. The text formatting options are limited; several formatting buttons are missing compared to the desktop version.

The trick I sometimes use is to set the phone's orientation to landscape, which gives you the desktop version's formatting options. I use what I need and then switch back to portrait mode.

It takes extra time, but it's better than not having the options at all.


@bazera Yes, because it's a screen real-estate issue.


You are God. You are Truth. You are Love. You are Infinity.

15 minutes ago, Leo Gura said:

That is by design. Mobile screen size is too small to have many option buttons.

Ok, but don't you think you could add a button to show more options, or do you deem that unnecessary for such a small inconvenience?

@Leo Gura, also, have you thought about redesigning your website (not the forum)?

No offense, but it's rather dull and old-looking.

Edited by Nemra

14 minutes ago, bazera said:

The trick I sometimes use is to set the phone's orientation to landscape, which gives you the desktop version's formatting options. I use what I need and then switch back to portrait mode.

It takes extra time, but it's better than not having the options at all.

Thanks for the info.

I always turned on the "Desktop site" option in Chrome to use other formatting options.

Edited by Nemra

3 hours ago, Human Mint said:

https://actualized.org/insights/intelligence-is-unity

This hits so hard. I don't know how or why, but it resonates to the bone. It explains every misery I experience in my life. Every time I suffer, it's because I am wishing for things to be connected. Being defensive is the surest way to be divided, because you are interrupting the flow of Intelligence.

In every area of your life that you're trying to upgrade, you're essentially training your ability to maintain or produce a constant flow of intelligence where there was none from YOUR end. So you're basically a server that needs constant tech support for repairing and making new connections.

And to connect it to the ugliness discussion: ugliness is division.

Edited by Yimpa

Beauty is all around Infinity


This content has some flaws. They cite a study about coding from one year ago.

ONE YEAR. That's very old. Nowadays developers code with agents, and studies show improvements of up to 20-50%.

You get a misleading picture from this documentary. There's no improvement from more data? Fine, but there are so many other ways to improve AI, and they don't even mention them.

2 hours ago, OBEler said:

This content has some flaws. They cite a study about coding from one year ago.

"Some" doesn't quite capture it. Almost everything it presents is sloppy as hell and misleading. 

Their main evidence that programming productivity gains are a mirage is that they tracked 16 SWEs about a year ago who did worse with agentic coding. They present weak, incredibly low-sample-size, non-peer-reviewed studies as meaningful science. You shouldn't let a source like this sway you on anything.

And of course brain activity decreases when using tools. That's the whole point of tools. lol. Historically, tools tend to expand cognition, not collapse it. 

Also, the video serves as a perfect example of doing exactly what it warns about: presenting information confidently enough that people assume it's correct via authority signals (studies). 

It's worth noting that the video's title is "LLMs can't reason - AGI impossible," while only 20% of the video is about that. The other 80% is "AI is bad for humans." It has a propaganda feel to it.

Edited by Joshe

What if this is just fascination + identity + seriousness being inflated into universal importance?


@Joshe Yes, this content is overall pretty low quality, with outdated studies.

The only thing it got right was mentioning that you get less critical and don't double-check, out of trust and laziness.


I asked Claude a question like this:

Can deep epistemic rigour save the world? (Based on the recent post about corruption and caring about Truth.)

The answer:

What a question. Let me take it seriously.

The case that it can:

Deep epistemic rigour — careful attention to how we form beliefs, what evidence we demand, how we reason under uncertainty — does seem load-bearing for almost every civilizational challenge. Climate policy fails partly because motivated reasoning and tribal epistemics let people discount overwhelming evidence. AI safety is hard partly because researchers don't rigorously interrogate their own assumptions. Pandemic response failed partly because institutions couldn't update quickly on new data. In each case, better thinking seems like it would help.

There's also a compounding argument: the people and institutions that shape the world disproportionately — scientists, policymakers, judges, engineers — are exactly the people whose epistemic habits matter most. Improving those habits at the margin could have outsized effects.

The case that it can't:

Rigour is a tool, not a motivation. The most epistemically careful person can still pursue narrow or destructive ends. And the world's most catastrophic actors aren't usually making reasoning errors — they're often quite clear-eyed about what they want and how to get it. The problem is values, not epistemics.

There's also a coordination problem. Even if every individual reasoned perfectly, collective action problems, power asymmetries, and structural incentives can produce terrible outcomes from perfectly rational actors. Game theory is full of tragedies that require no irrationality at all.

And then there's the elitism trap: "if everyone just reasoned better" can become a way of blaming ordinary people for the failures of systems designed to exploit cognitive shortcuts. The demand for rigour can itself become a rhetorical weapon — used to dismiss intuitions that are actually tracking something real but hard to formalize.

A more honest synthesis:

Epistemic rigour is probably necessary but very far from sufficient. It matters most in specific, high-leverage contexts: how scientific institutions self-correct, how policy incorporates uncertainty, how AI systems are evaluated. In those domains, the difference between sloppy and rigorous thinking is enormous.

But "saving the world" also requires things rigour can't supply on its own — moral seriousness, political will, solidarity, luck, and coordination mechanisms that make cooperation rational even for self-interested actors.

Perhaps the best version of the claim isn't that rigour saves the world directly, but that epistemic cowardice — the willingness to believe comfortable things, to avoid updating, to confuse fluency with understanding — is one of the most reliable engines of catastrophe. Rigour, in that sense, is less a solution than a discipline against self-destruction.

So: probably not sufficient. Possibly necessary. Definitely underrated.

Edited by Rafael Thundercat

4 minutes ago, Rafael Thundercat said:

save the world?

I don't even know what that means. The world does not need saving.

But it will definitely evolve mankind to higher ways of living.

We are talking about how to get mankind away from living like animals. Sex trafficking minors is animal behavior.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


Just now, Leo Gura said:

I don't even know what that means. The world does not need saving.

But it will definitely evolve mankind to higher ways of living.

Yes, I think asking it to "save the world" was asking too much; I just wanted to see how it would tackle the question. Of course, AI can't even grasp what sort of Truth I was referring to.

50 minutes ago, OBEler said:

The only thing it got right was mentioning that you get less critical and don't double-check, out of trust and laziness.

It's even wrong there IMO. Intellectual rigor is like a cognitive habit that remains relatively stable. Lazy thinkers of yesterday will usually be lazy thinkers tomorrow. People who used to search Google to find the first easiest answer are using AI just like that. 

AI doesn't create laziness - it just reveals the level of intellectual rigor that was already there. Serious thinkers still verify, but now they can do it faster and better. 

Also, people have always stopped double-checking once they feel confident in a source. Books, teachers, Google, Stack Overflow. Most people were already doing the minimum verification they felt necessary with Google.

The verification process usually includes a ton of logistical busywork - not higher-order cognition. AI eliminates the bulk of that busywork, freeing serious thinkers up for higher reasoning.

Evaluation is a higher form of cognition than logistical busywork. With AI, people will evaluate MORE, not less, because AI reduces the logistical friction. Even unserious thinkers will increase in higher level reasoning as a result of AI. The net effect on cognition will be positive, IMO. 

"Tools change the speed of thinking loops, but intellectual rigor comes from the mind running the loop. Reducing friction around information tends to increase the number of reasoning cycles people run. When the number of cycles increase, even people who are not highly rigorous will engage in more higher-level reasoning than they previously did."

Someone who previously reasoned through 10 questions per week might now reason through 100.


Edited by Joshe

What if this is just fascination + identity + seriousness being inflated into universal importance?


@Joshe AI can be used responsibly, but mostly it won't be, because the people using it never cared about truth in the first place.

Epistemically irresponsible people will use AI to double down on their biases because they don't know any better.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.


@Leo Gura Since you love game dev and stories from creative people, I encourage you to look into Edmund McMillen. He's one of the OGs of the indie game development scene.

He recently released a great game called Mewgenics, which has been a big success, selling 1 million copies within a week. There are very interesting behind-the-scenes topics in this interview:

I've been playing that game for the last month. It's so fucking good.

Edited by Sincerity

Words can't describe You.


6 minutes ago, Leo Gura said:

@Joshe AI can be used responsibly, but mostly it won't be, because the people using it never cared about truth in the first place.

Epistemically irresponsible people will use AI to double down on their biases because they don't know any better.

Do you think my question for Claude on epistemic rigour was epistemically irresponsible? This is an honest question, not a need for validation. I want to use it better, not to avoid thinking for myself. Even if the answer comes from AI or from you, from my POV both are external sources of knowing, and I still need to verify any claim by direct experience.

9 minutes ago, Rafael Thundercat said:

Do you think my question for Claude on epistemic rigour was epistemically irresponsible?

No, unless you are mindlessly believing its answers.


You are God. You are Truth. You are Love. You are Infinity.

16 minutes ago, Leo Gura said:

Epistemically irresponsible people will use AI to double down on their biases because they don't know any better.

But that's not all it will do. 

In almost every conversation I have with AI, it corrects or checks me on something. It very often lets you know when you haven't reasoned well or have missed something. The average person coming in contact with that several times a week is a huge deal for epistemic responsibility.

IMO, epistemic responsibility will skyrocket with each new generation because they'll grow up in an environment where it's a common topic and a necessary skill. They'll be taught early how to verify answers, how to prompt effectively, and how to detect hallucinations. Learning how not to be duped by AI is essentially learning epistemic responsibility.


What if this is just fascination + identity + seriousness being inflated into universal importance?

