But that's not all it will do. In almost every conversation I have with AI, it corrects or checks me on something. It very often lets you know when you haven't reasoned well or have missed something. The average person coming into contact with that several times a week is a huge deal for epistemic responsibility. IMO, epistemic responsibility will skyrocket with each new generation because they'll grow up in an environment where it's a common topic and a necessary skill. They'll be taught early how to verify answers, prompt effectively, and detect hallucinations. Learning how to not be duped by AI is essentially learning epistemic responsibility.
-
It's even wrong there, IMO. Intellectual rigor is a cognitive habit that remains relatively stable: lazy thinkers of yesterday will usually be lazy thinkers tomorrow. People who used to search Google for the first easy answer are using AI exactly the same way. AI doesn't create laziness; it just reveals the level of intellectual rigor that was already there. Serious thinkers still verify, but now they can do it faster and better.

Also, people have always stopped double-checking once they feel confident in a source: books, teachers, Google, Stack Overflow. Most people were already doing the minimum verification they felt necessary with Google. The verification process usually consists of a ton of logistical busywork, not higher-order cognition. AI eliminates the bulk of that busywork, freeing serious thinkers up for higher reasoning. Evaluation is a higher form of cognition than logistical busywork. With AI, people will evaluate MORE, not less, because AI reduces the logistical friction. Even unserious thinkers will do more higher-level reasoning as a result. The net effect on cognition will be positive, IMO.

"Tools change the speed of thinking loops, but intellectual rigor comes from the mind running the loop. Reducing friction around information tends to increase the number of reasoning cycles people run. When the number of cycles increases, even people who are not highly rigorous will engage in more higher-level reasoning than they previously did."

Someone who previously reasoned through 10 questions per week might now reason through 100.
-
"Some" doesn't quite capture it. Almost everything it presents is sloppy as hell and misleading. Their main evidence that programming productivity is a mirage is that they tracked 16 SWEs about a year ago who did worse with agentic coding. They present weak, incredibly low sample size, non peer-reviewed studies as meaningful science. You shouldn't let a source like this sway you on anything. And of course brain activity decreases when using tools. That's the whole point of tools. lol. Historically, tools tend to expand cognition, not collapse it. Also, the video serves as a perfect example of doing exactly what it warns about: presenting information confidently enough that people assume it's correct via authority signals (studies). It's worth noting the video title is "LLMs can't reason - AGI impossible" while only 20% of the video is about that. The other 80% is ai-bad for humans. It has a propaganda feel to it.
-
Yes, it's called a prediction. Most people who have an opinion predict AI use will lead to cognitive atrophy. And maybe I'm mistaken, but it seems I've seen you lean heavily in that direction as well. I predict it won't. Most people who predict it will are making several errors, the first of which is failing to think seriously about the issue for any extended amount of time; instead they parrot whatever is most intuitive. I can build a very strong case that the common prediction is wrong, at which point confidence in it takes a nosedive, at least if you allow reason to update your models.
-
Of course. I was referring to early stages of physical and cognitive development. But my main point was that the vast majority of humans would be made more cognitively capable by AI usage, not less.
-
I used to think that too, until a few hours ago. I’ll explain why it’s not the case in a new post.
-
Cognitive decline due to AI usage would mostly occur only in minds still under development. Even unserious thinkers who believe everything the AI tells them will be made more intelligent by AI, not less.
-
Yes, dropping agents into an existing (brownfield) production codebase is very risky, but a lot of that risk can be mitigated with good strategy. Getting agents up and running in a brownfield codebase is largely a project management and system design problem that requires lots of iteration and creativity. The problem is largely solvable; you just have to figure it out. You could instruct Claude Code to write comprehensive test coverage for every database operation. Before anything touches a production DB, you'd have a test suite confirming everything works in staging. That's just one guardrail you could build into your agentic workflow.
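To make that guardrail concrete, here's a minimal sketch of what one such test might look like, assuming a Node/TypeScript stack with Vitest and a Postgres staging database. The `users` table, the `STAGING_DATABASE_URL` variable, and every name here are hypothetical, not from any real codebase:

```ts
// guardrail.test.ts: a staging-only check on one database operation.
// Runs against staging, never production; connection comes from an env var.
import { describe, it, expect, afterAll } from "vitest";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.STAGING_DATABASE_URL });
const email = "agent-guardrail@example.com"; // hypothetical test fixture

describe("users repository (staging guardrail)", () => {
  afterAll(async () => {
    // Clean up the fixture row, then close the connection pool.
    await pool.query("DELETE FROM users WHERE email = $1", [email]);
    await pool.end();
  });

  it("inserts a user and reads it back unchanged", async () => {
    await pool.query(
      "INSERT INTO users (email, name) VALUES ($1, $2)",
      [email, "Agent Test"]
    );
    const { rows } = await pool.query(
      "SELECT name FROM users WHERE email = $1",
      [email]
    );
    expect(rows[0].name).toBe("Agent Test");
  });
});
```

The point isn't this specific test; it's that an agent can be told to generate hundreds of these, so nothing it writes touches production data without passing in staging first.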
-
You don't need 80% of that legacy PHP for this use case. Talk about bloated slop. It's funny you guys are invoking code quality while defending WordPress-adjacent legacy PHP.
-
I never liked Bootstrap either. I use Tailwind for every project. Tailwind is easier to manage since v4 because you don't have to deal with tailwind.config anymore. It's still a pain to set up with build tools like Vite, but once you get it working, just make a boilerplate and every new project is smooth sailing (the v4 wiring is sketched below). I built a large site with Astro recently. Astro is pretty damn nice.
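For reference, a minimal sketch of that v4 + Vite setup, using Tailwind's official `@tailwindcss/vite` plugin (no tailwind.config file involved):

```ts
// vite.config.ts: Tailwind v4 wired into Vite through the official plugin.
import { defineConfig } from "vite";
import tailwindcss from "@tailwindcss/vite";

export default defineConfig({
  // The plugin scans your templates and generates the CSS; no config file.
  plugins: [tailwindcss()],
});
```

The only other step is putting `@import "tailwindcss";` at the top of your main CSS file, and that's the boilerplate you copy into each new project.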
-
@Leo Gura I have no wish regarding AGI and don't care about any predictions, because I lack the deep technical expertise to have an opinion. I'd argue against the cognitive decline point and say the opposite is true for the majority, but not all. But I was saying we're at a point where intelligent AI orchestration can build complex software, and the barrier to entry has recently dropped far, far below what it was 6 or even 3 months ago. An intelligent person could now use AI to brute-force most existing software, because 80-90% of software exists to move data around, display it, and let users interact with it, and AI understands all of these patterns very well. This very forum is just a data model with a UI on top (see the sketch below). It wouldn't be ambitious to build a better version in a week. I can see where you're coming from if you're looking at LLMs through the lens of chatbots. Chatbots in general do seem to have stagnated. But the most significant developments happening now are with agents, and they're huge. They definitely shouldn't be conflated with chatbots.
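To illustrate the "data model with a UI on top" point, here's a toy sketch of how little a forum's core actually is; every name here is illustrative, not taken from any real codebase:

```ts
// The core entities of a typical forum: a few records and the
// relations between them. Everything else is queries plus rendering.
interface User {
  id: string;
  username: string;
  joinedAt: Date;
}

interface Thread {
  id: string;
  authorId: string; // references User.id
  title: string;
  createdAt: Date;
}

interface Post {
  id: string;
  threadId: string; // references Thread.id
  authorId: string; // references User.id
  body: string;
  createdAt: Date;
}

// A typical page is then just a query over these records:
function latestPosts(posts: Post[], limit = 20): Post[] {
  return [...posts]
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime())
    .slice(0, limit);
}
```

These are exactly the CRUD patterns AI models have seen millions of times, which is why they handle this class of software so well.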
-
A working prototype of Figma built in a weekend with strategic AI orchestration has no value? lol. If it allows millions of people to stop paying for Figma, it has a ton of value. Also, my main point wasn't about the application itself so much as the implication of what's now possible. Let's not forget that almost all software starts out as slop and is refined over time, regardless of who makes it. "Done is better than perfect." "Move fast and break things." Whoever created that Figma clone could spend the next 6 months refining it, and a billion-dollar company could stand to lose a lot. That's a real threat. And it's not just one guy out there working on a Figma clone. The possibility space has changed drastically with Claude Opus 4.6. If you haven't been using Claude since the beginning and haven't been tracking its capabilities, you have to defer to your intuition or some talking head about what's possible. The only way to know what's possible is to actually use it and judge from there. Most of the failure modes that existed 6 months ago don't exist anymore, which you could only know by using it.
-
I have a similar dynamic with my mom. My mom and I are just cognitively incompatible in a fundamental way. The things I want to talk about, she has zero interest in, and vice versa. I'll mention something about an underlying pattern or mechanism and she goes silent. Then she brings up some inconsequential thing she noticed in the environment and I just can't stick with it for long. I concluded that all you can do is not give in to resentment, drop expectations, and not look to that person for social fulfillment. Sucks when it's your mom, but it is what it is.

My mom does the same thing with not wanting to recognize the value of my advice. This actually fits the ISTJ profile. They don't track who is the smartest or even who has the best record of being right; they track who has the most standing: seniority, credentials, age, etc. They're oriented toward what they consider "established order". They don't evaluate ideas on merit, but on who is saying them.

Also, I noticed in my mom a defensive mechanism involved in ignoring me. If a neighbor gives good advice and she follows it, it costs her nothing. But if her son gives good advice and she follows it, that involves subordinating herself to someone she feels should be below her in the hierarchy. This has been really frustrating to deal with, especially in critical moments involving her health. It's a very frustrating incompatibility. You're not alone.
-
All good! 100%. You'd have to be very careful in that scenario.
-
Of course you need a human to manage things. My entire comment made that clear. But there are many people with little technical knowledge making money from AI-coded solutions right now. Just because you aren't doing it doesn't mean it's not happening. The video I posted shows a 10-year SWE transitioning from "not letting AI write any of my code" to "AI writes most of my code and has 10x'd my output". Watch that video and let me know what you think.
