Everything posted by Joshe

  1. Leo

    Yeah, I also saw how you could use them to construct or reinforce any metaphysical belief you want. I recognized it my first time as well and was aware it would be an error to consistently plug Leo's metaphysics into the experience, because it was obvious I would just be constructing the whole thing until the construction was perfect and I believed it to be absolute truth. But I think psychedelics are usually only brainwashing tools for people who are motivated to interpret or derive meaning from the experiences. If you don't go looking for meaning, they're just very interesting experiences. But if you constantly try to find love, infinity, god, or whatever, you'll eventually find it, and they will be your own unconscious constructions.
  2. Leo

    up there = safe
    up there = right
    up there = valuable
    up there = untouchable

    To understand Leo's ego (and even our own), check out Karen Horney's "aggressive/expansive" personality in "Neurosis and Human Growth". It's basically how the self comes to organize around superiority. The Expansive resolves inner insecurity by becoming exceptional, superior, and invulnerable. They exhibit:
    • grandiosity
    • need to be exceptional
    • contempt for ordinary humanity
    • intolerance for weakness
    • drive toward greatness
    • belief in one's unique gifts

    These coalesce to form what she calls the "Idealized Self", which becomes the organizing principle of the ego. The person constructs a grand image of who they are and the "Search for Glory" begins. From there, a protective mechanism called the "Pride System" forms, which is a collection of strategies used to reinforce and maintain the Idealized Self. The main strategy of the Pride System is contempt for inferiority and mediocrity.

    Contempt for inferiority/mediocrity starts off as an internal defensive mechanism to protect the Idealized Self, but quickly gets externalized onto people who are seen to be embodying the traits the Idealized Self rejects (weakness, confusion, mediocrity, inferiority, unintelligence). Without corrective measures, the Expansive will eventually find a way to place virtually all humans beneath them, because they will continuously run into humans who threaten their Idealized Self.

    Horney says they justify their contempt by framing it as "high standards", which is very interesting because I just came across what seems to be a textbook example of that on Leo's blog. He recently shared a tragic story about a very lost and troubled young woman, and his instinct was to use her as an example of human depravity and to equate her and most of humanity's behavior to that of an animal within the first 3 sentences. He then spent the rest of the post explaining that he only sees it this way because he has "high standards".

    The Pride System needs narratives that feel principled, not self-serving, so it comes up with things like:
    • I reject mediocrity and demand excellence
    • I'm principled, not arrogant
    • I'm not contemptuous, I just see clearly and call things what they are

    The Expansive isn't conscious of any of this, and they don't view themselves as egoic. They truly see themselves as principled, rigorous, strong, and as having the courage to look reality in the face and accept it. And there's definitely truth in that, but the problem is they see only what they want to see regarding themselves, which is that they are above and everyone else is below.

    Another aspect of the expansive type is that relationships with them are not collaborative/mutual, but hierarchical. To be vulnerable and a mere human engaging with another human on the same level implies "we are fundamentally the same kind of being", which threatens the Idealized Self.

    Just my theory.
  3. It depends on the conspiracy and what it offers. Most ordinary people get sucked in by intellectual intrigue: curiosity, playing detective/cracking a puzzle, the shiny object. Also, pre-existing motivations like distrust in institutions. Things start to turn weird when they admit they believe the theory and then get pushback. That's the point when identity enters the chat ("I can see what others can't", "people are asleep/sheep") and motivated reasoning becomes dominant. This arouses the rational person's identity defenses; they then feel compelled to flaunt their epistemic superiority, which humiliates the pre-rational and causes them to double down. It's often both sides using epistemic superiority to stabilize identity: "I can see what others can't". Rational people unwittingly play a large role in the epistemic breakdown.
  4. "Essentialism: The Disciplined Pursuit of Less" is a good one. Success is simple but not easy to reach. You can easily map out paths that lead to success. The hard part is being able to sustain trajectory while navigating an ever changing, unpredictable reality that demands action and persistent context switching across multiple logistical domains, and being able to absorb all the shock from that while avoiding drift. Goal → action → interruption → new demand → context switch → loss of momentum → restart → drift The best strategy is to design systems and environments that stabilize trajectory and minimize derailment, while excluding everything that isn’t essential. This is the thing to get handled. One foot in front of the other with the main thing remaining the main thing. That's it.
  5. If all distinctions are imaginary, then so is a rock.
  6. Cognition occurs within awareness. What you experience when you're not thinking is awareness with nothing flowing through it. The less cognition, the more noticeable awareness is. Awareness starts to feel infinite at low levels of cognitive activity, just like a dark room. Cognition is how we interface with the presumed infinity. But cognition has constraints. We can't compute 2x8 and 8x3 at the same time. To interpret "reality is infinite" and "reality is love" at the same time requires cognition. You can only focus on one or the other. Or train cognition such that both compress into a single gestalt. But this is modifying interpretation through cognition, not expanding awareness.
  7. It's supply and demand. Spiritual narratives stabilize the self, solving many serious problems at once. They make people feel:
    • special
    • purposeful
    • enlightened
    • morally superior
    • connected to ultimate truth
    • part of a cosmic story
    • and more

    Belief's main function is stabilization. Most people, including intelligent ones, are very willing to adopt sets of beliefs if they're packaged coherently and can stabilize the self. Stabilizing the self is what's most important. Most seekers aren't really after truth. That's just post-hoc rationalization 99% of the time.

    The supplier must tend to the fact that the demand largely consists of unconscious stabilization needs and not a need for truth. This seems a hard thing to balance for an integrous teacher. Serving stabilization is antithetical to many of the truths an integrous teacher would want to teach. But if you don't serve ample stabilization, your audience will look elsewhere.
  8. Significant expansion is a pipe dream because human awareness has limited bandwidth. People mistake the practice of channeling awareness for expanding it. You can practice a specific conceptual or perceptual skill so much that you gain compression artifacts substituting for a lot of granularity, which you can experience as a gestalt, and those can feel profound, but that isn't "expanding" awareness. It's just focusing/channeling awareness on a specific thing. One could do nothing but train their mind to perceive reality as pure consciousness to the point they live from that recognition, and they could ponder the logical implications and work to integrate those, and this would still not produce "expanded" awareness. Trained awareness, yes; expanded, no.

    Conceptual spirituality (interpretation + metaphysical narrative) is held together by unconscious, accumulated logic and premises that, over time, coalesce into a metacognitive gestalt. The experience of the gestalt is often mistaken for "expanded" awareness or transcendence or awakening. If a practice is largely building and compressing an interpretive framework, then "more practice" doesn't strip away filters to reveal what already is. It's just swapping one set of filters for another.

    "The only way to verify my claims is to build the same compressed gestalt I have, at which point you'll agree with me."
  9. So far I see 5 types who regularly hate on AI:
    1. Inexperienced users not interested in the tool
    2. Conspiratorial thinkers
    3. People who were already heavily focused on technological harm pre-AI
    4. Conservatives
    5. Intellectual egos (AI threatens the visible gap between expert and non-expert cognition)

    #5 is interesting. There are legit intelligent people out there who are fundamentally biased against AI just because it makes access to high quality information and thinking more generally available. They cloak their disdain in virtue (concern), but the truth is, AI is a massive threat to intellectual hierarchies, and many identities built on intellectual superiority feel threatened by it. Just something to keep in mind when you watch a smart person bash the shit out of AI. If they advocate to slow down or use it less instead of education/AI literacy, their concern is most likely not epistemics. It's about the moat.
  10. Progress has been made. Gotta start somewhere.
  11. By not being passive. By interrogating and being critical of the output. It's a reasoning instrument, not an authority. The most powerful use of AI is interactive interrogation. The very fact that everyone is asking "How do we know if we can trust it?" means the idea of epistemic responsibility has gone mainstream. Humans will adapt, just like how they stopped trusting the first answer they saw on Google. AI will eventually normalize interrogating answers. If that happens, it would be one of the biggest shifts in the epistemic environment humans have ever experienced.
  12. But that's not all it will do. In almost every conversation I have with AI, it corrects or checks me on something. It very often lets you know when you haven't reasoned well or have missed something. The average person coming in contact with that several times a week is a huge deal for epistemic responsibility. IMO, epistemic responsibility will skyrocket with each new generation because they'll grow up in an environment where it's a common topic and a necessary skill. They'll be taught early how to verify answers, how to prompt effectively, and how to detect hallucinations. Learning how to not be duped by AI is essentially learning epistemic responsibility.
  13. It's even wrong there IMO. Intellectual rigor is like a cognitive habit that remains relatively stable. Lazy thinkers of yesterday will usually be lazy thinkers tomorrow. People who used to search Google to find the first, easiest answer are using AI just like that. AI doesn't create laziness - it just reveals the level of intellectual rigor that was already there. Serious thinkers still verify, but now they can do it faster and better.

    Also, people have always stopped double-checking once they feel confident in a source. Books, teachers, Google, Stack Overflow. Most people were already doing the minimum verification they felt necessary with Google. The verification process usually includes a ton of logistical busywork - not higher-order cognition. AI eliminates the bulk of that busywork, freeing serious thinkers up for higher reasoning. Evaluation is a higher form of cognition than logistical busywork. With AI, people will evaluate MORE, not less, because AI reduces the logistical friction. Even unserious thinkers will do more higher-level reasoning as a result of AI. The net effect on cognition will be positive, IMO.

    "Tools change the speed of thinking loops, but intellectual rigor comes from the mind running the loop. Reducing friction around information tends to increase the number of reasoning cycles people run. When the number of cycles increases, even people who are not highly rigorous will engage in more higher-level reasoning than they previously did."

    Someone who previously reasoned through 10 questions per week might now reason through 100.
  14. "Some" doesn't quite capture it. Almost everything it presents is sloppy as hell and misleading. Their main evidence that programming productivity is a mirage is that they tracked 16 SWEs about a year ago who did worse with agentic coding. They present weak, incredibly low sample size, non peer-reviewed studies as meaningful science. You shouldn't let a source like this sway you on anything. And of course brain activity decreases when using tools. That's the whole point of tools. lol. Historically, tools tend to expand cognition, not collapse it. Also, the video serves as a perfect example of doing exactly what it warns about: presenting information confidently enough that people assume it's correct via authority signals (studies). It's worth noting the video title is "LLMs can't reason - AGI impossible" while only 20% of the video is about that. The other 80% is ai-bad for humans. It has a propaganda feel to it.
  15. Yes, it's called a prediction. Most people who have an opinion predict AI use will lead to cognitive atrophy. And maybe I am mistaken, but it seems I've seen you lean heavily in that direction as well. I predict it won't. Most people who predict it will are making several errors, the first of which is failing to think seriously about the issue for any extended amount of time; they just parrot whatever is most intuitive. I can build a very strong case that the common prediction is wrong, at which point confidence in it takes a nosedive. At least if you allow reason to update your models.
  16. Of course. I was referring to early stages of physical and cognitive development. But my main point was that the vast majority of humans would be made more cognitively capable by AI usage, not less.
  17. I used to think that too, until a few hours ago. I’ll explain why it’s not the case in a new post.
  18. Cognitive decline due to AI usage would mostly only occur in minds still under development. Even unserious thinkers who believe everything the AI tells them will be made more intelligent by AI, not less.
  19. Yes, dropping agents into an existing (brownfield) production codebase is very risky, but a lot of that risk can be mitigated with good strategy. Getting agents up and running in a brownfield codebase is largely a project management and system design problem that requires lots of iteration and creativity. The problem is largely solvable, you just have to figure it out. You could instruct CC to write comprehensive test coverage for every database operation. Before anything touches a production db, you'd have a test suite confirming everything works right in staging. That's just one guardrail you could build into your agentic workflow.
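    Just to make the guardrail idea concrete, here's a rough sketch of what one of those generated tests could look like (Vitest-style; `db` and `createPost` are made-up stand-ins, not anything from a real codebase, and the connection is assumed to point at staging):

    ```ts
    // Hypothetical guardrail test for one database operation.
    // `db` and `createPost` are illustrative stand-ins; point them at STAGING only.
    import { describe, it, expect, afterEach } from "vitest";
    import { db } from "./db";             // assumed staging connection helper
    import { createPost } from "./posts";  // assumed repository function under test

    describe("createPost (staging)", () => {
      afterEach(async () => {
        // Remove rows written by the test so staging stays predictable between runs.
        await db.query("DELETE FROM posts WHERE author = $1", ["agent-test"]);
      });

      it("persists a post and returns it with an id", async () => {
        const post = await createPost({ author: "agent-test", body: "hello" });
        expect(post.id).toBeDefined();

        const rows = await db.query("SELECT body FROM posts WHERE id = $1", [post.id]);
        expect(rows[0].body).toBe("hello");
      });

      it("rejects empty bodies instead of writing garbage", async () => {
        await expect(createPost({ author: "agent-test", body: "" })).rejects.toThrow();
      });
    });
    ```

    The point isn't this exact test, it's that every risky operation gets a check like this in staging before an agent's changes ever touch production.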
  20. You don't need 80% of that legacy PHP for this use case. Talk about bloated slop. It's funny you guys are invoking code quality while defending WordPress-adjacent legacy PHP.
  21. I never liked Bootstrap either. I use Tailwind for every project. TW is easier to manage since v4 because you don't have to deal with the tailwind.config anymore. It's still a pain to set up with build tools like Vite but once you get it set up, just make a boilerplate and then every new project is smooth sailing. I built a large site with Astro recently. Astro is pretty damn nice.
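    For anyone setting it up fresh, the v4 + Vite wiring is roughly this (a minimal sketch using the first-party @tailwindcss/vite plugin; file names are just the usual defaults, adjust to your project):

    ```ts
    // vite.config.ts -- minimal Tailwind v4 + Vite sketch
    // (assumes the `tailwindcss` and `@tailwindcss/vite` packages are installed)
    import { defineConfig } from "vite";
    import tailwindcss from "@tailwindcss/vite";

    export default defineConfig({
      plugins: [tailwindcss()],
    });

    // The CSS side is a single line in your main stylesheet (e.g. src/app.css):
    //   @import "tailwindcss";
    // No tailwind.config.js needed; v4 customization lives in CSS.
    ```

    Once that boilerplate exists, you can copy it into every new project and skip the setup pain.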
  22. @Leo Gura I have no wish for AGI, nor do I care about any predictions, because I lack the deep technical expertise to have an opinion. I'd argue against the cognitive decline point and say it's true for the majority, but not all. But I was saying we're at a point where intelligent AI orchestration can build complex software and the barrier to entry has recently dropped far, far below what it was 6, or even 3 months ago. An intelligent person could now use AI to brute force most existing software, because 80-90% of software exists to move data around, display it, let users interact with it, etc., and AI understands all these patterns very well. This very forum is just a data model with a UI on top (rough sketch below). It wouldn't be ambitious to build a better version in a week. I can see where you're coming from if you're looking at LLMs through the lens of chatbots. It does seem chatbots in general have stagnated. But the most significant developments happening now are with agents, and they're huge. And they definitely shouldn't be conflated with chatbots.
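    To illustrate the "data model with a UI on top" point, here's a toy sketch (types invented for illustration, not taken from any real forum's code):

    ```ts
    // Toy forum data model -- invented names, purely illustrative.
    interface User {
      id: string;
      name: string;
      joinedAt: Date;
    }

    interface Thread {
      id: string;
      title: string;
    }

    interface Post {
      id: string;
      threadId: string;  // -> Thread.id
      authorId: string;  // -> User.id
      body: string;
      createdAt: Date;
    }

    // Most of the app is CRUD over these records plus a UI that renders them.
    // "Everything posted by a member" is just a filter:
    function postsByAuthor(posts: Post[], authorId: string): Post[] {
      return posts.filter((p) => p.authorId === authorId);
    }
    ```

    AI already understands shapes like this extremely well, which is exactly why the barrier to entry dropped.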
  23. A working prototype of Figma built in a weekend with strategic AI orchestration has no value? lol. If it allows millions of people to stop paying for Figma, it has a ton of value. Also, my main point wasn't about the application itself so much as it was the implication of what's now possible. Let's not forget that almost all software starts out as slop and is refined over time, regardless of who makes it. "Done is better than perfect". "Move fast and break things". Whoever created that Figma clone could spend the next 6 months refining it, and a billion-dollar company could stand to lose a lot. That's a real threat. And it's not just one guy out there working on a Figma clone. The possibility space has changed drastically with Claude Opus 4.6. If you haven't been using Claude since the beginning and haven't been tracking its capabilities, you would have to defer to your intuition or some talking head about what's possible. The only way to know what's possible is to actually use it and judge it from there. Most of the failure modes that existed 6 months ago don't exist anymore, which you could only know by using it.
  24. I have a similar dynamic with my mom. Me and my mom are just so cognitively incompatible in a fundamental way. The things I want to talk about, she has zero interest in and vice versa. I'll mention something about an underlying pattern or mechanism and she goes silent. Then, she brings up some inconsequential thing she noticed in the environment and I just can't stick with it for long. I concluded that all you can do is not give into resentment, drop expectations, and don't expect that person to provide social fulfillment. Sucks when it's your mom, but it is what it is. My mom does the same thing with not wanting to recognize the value of my advice. This actually fits the profile of the ISTJ. They don't track who is the smartest or even who has the best record of being right - they track who has the most standing. Seniority, credentials, age, etc. They're oriented towards what they think is "established order". They don't evaluate ideas on merit, but rather on who is saying them. Also, I noticed in my mom there's a defensive mechanism involved in ignoring me. If a neighbor gives good advice and she follows it, it costs her nothing. But if her son gives good advice and she follows it, that involves subordinating herself to someone she feels should be below her in the hierarchy. This has been really frustrating to deal with, especially in critical moments involving her health. It's a very frustrating incompatibility. You're not alone.
  25. All good! 100%. You'd have to be very careful in that scenario.