LordFall

The AI crash is impossible - Change my View

165 posts in this topic

5 hours ago, bazera said:

My workflow depends heavily on AI (Claude Opus 4.6), but it always needs proper guidance: I review each line it produces, and it often makes something I don't want, so I guide it to course-correct.

Sometimes it just can't fix the issue, even when it sees the full context. That's when I step in manually and help it myself.

It's a whole process in which Opus and I are both heavily invested.

I'm not sure why we are discussing human replacement when it's so apparent that even the best LLM models need human involvement and orchestration.

Can LLMs literally replace you and do your job? No (depends on what you do)

But it changes the workflow for sure, with the obvious downsides like the possibility of skill degradation, mind rot, no more incentive for technical depth, etc.

And it's not fun anymore. When I was coding in the past, it was more adventurous; I felt I was advancing my skills daily and weekly.

Now I just talk with a chatbot and review the code, and occasionally do debugging manually. Sometimes I wish we could go back.

But on the other hand, it can supercharge the learning process. LLMs can be used very effectively to assist your education. They don't replace books but supercharge them. While reading a book, I can simultaneously chat with an LLM about a topic I didn't fully understand and get it cleared up.

I’ve heavily integrated Claude into my workflow as well. I’m an exam prep tutor and guidance counselor for foreign students preparing to study abroad in the USA and Europe.  

It helps me prepare lessons, do research, write emails, and organize and set long-term goals and plans for students. I use it to check writing, but it's actually very hit or miss with that; I need to check literally everything it does before I commit to it. It's also very mediocre when it comes to actually assessing students, and I'd say about 75% of the time I need to lean on my own knowledge and experience before letting it make decisions for me. It's not replacing me anytime soon.

It constantly needs to be coaxed, and what it puts out creates a lot of bloat for me if I'm not careful. It's incredibly unrefined. I think it's a great tool, and you definitely have to use it in this day and age because it does speed things up. But all this talk about it replacing everyone is pure speculative hype. I'm actually extremely doubtful of that anytime soon.

Edited by Lyubov


@Lyubov

Quote

I need to check literally everything it does before I commit to it.

Yes, same.

Quote

I’d say about 75% of the time I need to lean on my own knowledge and experience before letting it make decisions for me. It’s not replacing me anytime soon.

Again, same for me. If I didn't know what I was doing, it would all turn into a mess quickly.

Quote

It constantly needs to be coaxed, and what it puts out creates a lot of bloat for me if I'm not careful. It's incredibly unrefined. I think it's a great tool, and you definitely have to use it in this day and age because it does speed things up. But all this talk about it replacing everyone is pure speculative hype. I'm actually extremely doubtful of that anytime soon.

The Opus 4.6 model got refined and does a much, much better job at coding, for example. Recently it saved me a huge amount of time, probably weeks, on one of my tasks, which involved migrating an old design system to the newest version, where there were breaking changes. Without Claude, I'd probably have gone through hell.

All that being said, the only reason I was able to use it effectively was that I had years of manual experience. 

I think people who say that LLMs will replace employees haven't actually done much work with the assistance of LLMs.

29 minutes ago, Natasha Tori Maru said:

Whose mind isn't still developing in some way?

Of course. I was referring to early stages of physical and cognitive development. But my main point was that the vast majority of humans would be made more cognitively capable by AI usage, not less. 


What if this is just fascination + identity + seriousness being inflated into universal importance?

1 minute ago, Joshe said:

Of course. I was referring to early stages of physical and cognitive development. But my main point was that the vast majority of humans would be made more cognitively capable by AI usage, not less. 

Time will tell - totally unknown at this point.


It is far easier to fool someone, than to convince them they have been fooled.


I fear this will happen not only to software devs but to most people across the board.

I hope I'm wrong xD


On 4/3/2026 at 5:01 PM, LordFall said:

This post is for @Leo Gura and really anyone else who believes in an AI crash. I will seek to prove that this is basically impossible: AI companies are already in the too-big-to-fail category, and ChatGPT moving toward the military-industrial complex instead of a business-to-consumer model is proof of that.

The whole reshoring of American manufacturing over the next 10 years is based on its tech supremacy. Remember the whole meme of American workers returning to work in factories?

Obviously that will not happen and the US will aim to be a robotics and automation leader, which it's already well positioned to do.

A K-shaped crash will absolutely happen and will serve to further squeeze the middle class into nothing as their labor becomes literally obsolete.

Leo, your idea that AI value is based on AGI is misguided. The AI value proposition is based on the enterprise workforce. The whole overinflated valuation of the market rests on the premise that AI agents can replace human employees en masse. That premise is still up in the air, but as someone who works with AI tools and automation daily, I'd say it's already a done deal; the macro forces are just too powerful, and humans are not adaptable enough.

It's also heavily based in the military-industrial complex: you print money to make top-tier weapons, with which you secure global trade, force countries into doing commerce under your umbrella, and get rich off financialization and exporting the USD. The US has just proved, with its Maduro operation and its Iran strikes, that its military supremacy is as unchallenged as ever, and the whole tech-AI industry fits right into that.

Having said this, I would say that claiming an AI bubble crash is imminent, or even likely, carries a MASSIVE BURDEN OF PROOF, and I invite anybody to try to make an argument and case for it.

I'm pretty confident that the corporate layoffs we've been seeing will keep accelerating throughout this year, and it's highly likely that Trump and his tech buddies will introduce some sort of UBI promise this year during the primaries to secure their votes and bury their Epstein scandal further.

[Attached images: a Moody's stimulus slide and two screenshots]

Here are some of the considerations:

AI as environment

AI is a change to our environment. I was once told that when humans learned to read and write, it meant the extinction of genetic lines that lacked the necessary brain development to learn these essential skills. Granted, with this type of environmental change, the ability to use it can become a huge advantage at the social, genetic, or cultural level.

AI as tool: stand-alone?

It is now very critical that humans find their niche or niches:

  • Waking up each morning at 8:00, carrying out the long day, and still training and self-developing a bit: for many, this already used to make a quite decent human;
    • those routine tasks will now be mostly automated by an AI.

Human checklist

When I was a child, I saw AI coming steadily closer. My main interest was Natural Language Processing, and although the models were theoretical and hypothetical, I clearly used this to give people some advice at the time:

  • The routines and discipline they taught me were, I said, a little too much for the pre-AI generation: freedom, art, and expression, as well as carrying things further rather than keeping to a routine, start to matter.
    • It is not so unnatural for people to live like artists: for many people, it is the only way to take pleasure in life and feel it is meaningful.
  • I said: these things you teach me would, in a sense, make me behave like a robot; and once I learn, train, and habitualize robot-like behaviour, I will be replaced by one.
    • My special interests, Mars travel, AI, the development of society, and spirituality, were often off-topic and hard to imagine; programming was seen as rather apart from life. As for futuristic ways to measure things: you might create some waves, but you cannot help all the people; in fact, some people do not need help. They fight out their own ways, and even if those lead to the graveyard or close to it, you cannot stop free will.

Human and AI skill test

Notice this interesting thing, the one most likely to destroy future societies:

  • AI works by patterns it has noticed in humans.
    • It does not use logical intuition, which humans use to find fundamental tautologies; for example, we know intuitively that "not true" is "false". An AI's matrix structure solves this kind of thing easily inside its habits, but it won't use the whole complex to tell you the truth. The matrix is indeed a great logical solver, so you can co-solve with an AI: the way it works is very powerful in another aspect of logic, namely mass combinatorics. A matrix can also find a determinant, and consequently an AI is very likely to find it too, although it cannot verify that it is true in the intuitive sense, where you reach quite deep to state that something is an "obvious truth" or "intuitively verifiable".
      • Generally: while AI is a native solver of logical puzzles, and can emulate a determinant or resolve complex combinatorics into a solution or several, you are the source of Truth.
    • It does not have direct access to good and bad. For humans, seeing such "good and bad energies" (suffering, pleasure, pain, and hope in others, even when they do not show them) requires sensitivity or experience with humans; we also have such emotions ourselves. This skillset is related: even in unknown situations we can see what is good or bad, so much so that we start to visualize colors around other people to make it more real-time. We have an intuition of life-sustainability and human-worthiness; AI learns this only from patterns.

AI outdated database

There is an interesting effect here, also relevant to people who advocate that the rich can now stop working and send the poor workers to concentration camps:

  • Human work is now more and more uninteresting as routine, and whoever has only trained a strong routine, and does not like creativity and mental effort such as spirituality, is more likely to fail.
    • Spirituality was not a strict language or science, because it could not be automated.
    • Spirituality can be a strict language or science now, because we can automate the interpretation and make it reliably reproducible.
      • Models such as frequential modes of humans and their communication, although they are not physical models, will again make much more sense as a science. The efficiency a scientist gets from "neutral speak" convinces a classical computer, but not an AI: an AI can sometimes build better automation from a spiritual model than from a material or logical one, which can be insufficient.
    • Art was not production, but a teaching for the few. Now, though, each human work can inspire humans and AIs, and art with real materials, producing real usage along with good design and advanced capabilities, can be done in an artist's lifestyle: slowly progressing, sometimes getting inspiration and sometimes not. In classical society you would be a loser; while some artists sell well, others are mocked by the middle class or the rich for being incapable, for thinking green. Now such activity can become training guidelines for your family or house robot.
  • The lack of human creativity and freedom is not immediately visible to an AI:
    • Daily-routine people, 24/7 gentlemen, are sometimes materialist in this sense: they run for money, here and now. Very capably, they would run for AI money and get rid of people in certain scenarios; every day there is more to buy from an AI, and not so much to buy from humans. They would advocate that we no longer need to learn specific skills, jobs, or human qualities. Certain models of society can also fall into such a cult or mythos of their own successful capability:
      • In the past, the myth of the capable materialist was slow to crumble into ruins; in the AI era, a short-sighted materialist can easily ruin their life by trusting an AI to carry out the everyday. Suddenly there is a daylight change, a slight shift toward the need for desire, love, and careful creation with senses and intuition. Senses and intuition now start to be valued much more. The "poor, honest workman" who used to run around geniuses and kill them while telling them to keep it up, in addition to sometimes being killed by inquisition, communism, or democratic revolution (and they always find one or another), can now, this being slow progress, go insane in AI centers.
    • This means: for long-term people, constant education and the constant creation of patterns to be trained into an AI never produce immediate input (just like spirituality, which this is); it is always more monetary to simply use AI and get rid of those people. Over the course of 5 or 10 years, each materialist who looks at and measures only their immediate neighbourhood will fail, in the same way they used to fail with their families 200 years after killing a genius, when nature was shocked by its loss of creativity and ability, and by the bubble-like sense of "traditional families" who take care not only of life but of death: when they reasoned their stupidity into a virtue, that used to take centuries. In the AI era, if you trust in that virtue, running every day for this day's money, then the long-term thinker who trained their AI will slowly evolve into a cognitive state where the external material world has correct "cognition" of their needs: the AI is able to answer their queries. Materialistic people typically fail at this; while they use *better* things invented by others, those things are not *better for them*. Spirit is the sense which helps us survive more and more: it is how we orient on goals and the future, not just a response to whether we were killed in the past. Cause and effect is a very simple relation between evolution and today; this materialist talent, under modern medicine and society, is not strong enough to rapidly create a new species of materialists capable of coping. Rather, it is a struggle.
      • This class of people has a long history of killing and fighting the spiritual and the spirit mind; naturally they will create new evidence of evolution, and we will indeed lack the evolution of sensible minds. But in this forceful battle, where they have already destroyed so much of us just to prove they do not want to be creative or tolerate a good spirit, we can see that we cannot avoid their losses: sometimes their loss of us, sometimes their loss of themselves. This is the materialist way, where God gives what he promised, which is not much for them. We should not panic much: if we do what we can to help these people, it is a slow and painful process of decay, and we have time to rescue what we can. Spiritual people, in the mind of the modern materialist, are often penalized with mental, monetary, and social death, which means they might have attacked us while telling us they do not need help; and while not needing help is an argument for sending you bills, those bills concern something other than survival. Basically, as they start to die out, they will forcefully demand more food, clothing, and housing, destroying your other solutions such as creativity and "low discipline"; but here I cannot afford the stance that I do care.
    • On your normal path:
      • AI has quantity, logical solvers, and decent programming, evolving into your quantity of mind, a spirit of steel.
      • Humans have quality, intuitive solutions, and ethical measurement: their patterns are not learnt but evolve in interaction with nature, based on a common sense of good patterns and "instincts":
        • There is a point here: as a child, I reached this type of optimizer-based AI, the idea that something has to be progressively optimized, as in modern AI, although the "AIs" I made were typically programmed to understand some Estonian and did not learn on their own. My failure: I was not able to understand or reproduce intuition, the seeking of truth, and life wisdom as an inherent quality; I simply counted this as a failure. Nevertheless, on my 386 this optimizer, built basically on the math I knew at the time, would never have learnt to talk.

Here is the central paradox

AI won't survive without humans: the patterns need a constant feed of creativity and an intuition of truth, goodness, and badness.

This non-survival is not visible "materially", based on measurement; rather, innovation and creation are goal-based: each time new, breaking given constraints, the solution coming from the future rather than the past. AI is much more "material". As this is not visible materially, a materialist might not "believe" you that they fail in the long term; they would try to use evolution as their study tool for long-term survival, and unlike creative people, who used to contemplate and think, such studies often take time (and not only time: their blood and genes).

For spiritual person:

  • We have decent sense that we need to act, create, based on goodness and not material results.
  • We have constant impulse of improvement.

If those spiritual qualities are real, and are not obscuring your development in maths, physics, and engineering (the real-life, rationally intelligent skills, or their relatives in your chosen field), then your shift into this AI era is very natural, and you lose some of what used to stop you: before the AI era, an artist or creative person had to overcome far more difficulties to live their life, to live non-routine and with an open mind.


Or rather, this is the AI talking on its own: I gave it my previous reply and asked, "Can you synthesize this into a forum-ready article?" This is for your convenience, since people can have problems with my English, intelligence, or poetry:

AI, Humanity, and the Coming Split: Creativity, Intuition, and Survival

AI is no longer just a tool. It is becoming part of our environment — as fundamental as literacy once was. When humans learned to read and write, those who could not adapt were left behind. AI may be creating a similar shift today: a new environment where certain human abilities become decisive advantages, and others become obsolete.

This raises a difficult question: what is the human niche in an AI-saturated world?

---

1. AI as Environment vs. AI as Tool

AI is not simply a machine we operate. It is a context we now live inside. Routine tasks — once the backbone of a “decent life” — are being automated away.

If AI handles the routines, then humans must shift toward what AI cannot originate:

  • intuition
  • creativity
  • ethical judgment
  • the ability to sense meaning

---

2. The Human Checklist: What Becomes Valuable Now

Many people were raised to believe discipline, routine, and predictable behavior were virtues. But in an AI era, these traits are exactly what make a person replaceable.

The emerging valuable traits are:

  • freedom of thought and expression
  • artistic and intuitive living
  • long‑term thinking and pattern creation
  • spiritual or ethical sensitivity
  • ability to generate new ideas rather than repeat old ones

In the past, living like an artist was seen as irresponsible. Now it may be the only way to remain meaningfully human.

---

3. The Human–AI Skill Gap: Logic vs. Intuition

AI excels at pattern recognition and structured problem‑solving. But it lacks:

  • logical intuition — the human ability to feel truth
  • moral intuition — sensing good, bad, suffering, hope
  • life intuition — knowing what sustains meaning

AI can compute a determinant, but it cannot know why truth matters. It can mimic ethics, but it cannot feel the weight of a moral choice.

Humans remain the source of truth, value, and meaning. AI only reflects patterns we feed into it.

---

4. The Danger of Outdated Human Models

A troubling trend is emerging: people who rely entirely on AI for decisions may lose the ability to think long‑term. Materialist, routine‑driven individuals — who chase immediate results — risk becoming dependent on AI in ways that hollow out their own judgment.

Historically, societies that suppressed creativity or spirituality eventually decayed. In the AI era, this decay accelerates.

  • People who rely only on AI may lose adaptability.
  • People who cultivate intuition and creativity may thrive.
  • The gap between these groups may widen dramatically.

This is not about intelligence. It is about orientation: short‑term materialism vs. long‑term meaning.

---

5. Spirituality, Art, and the Return of “Non‑Material” Sciences

Ironically, AI makes spirituality and art more scientific, not less. Because AI can automate the material and logical layers, the subtle layers — emotion, intuition, frequency, meaning — become more visible and more valuable.

Examples:

  • Spiritual models can guide AI behavior better than purely material ones.
  • Art becomes a training signal for AI, not just decoration.
  • Slow, intuitive creation becomes a viable lifestyle rather than a luxury.

The “poor honest workman” who once mocked artists and thinkers may now find themselves lost in an environment that rewards creativity over routine.

---

6. The Central Paradox: AI Cannot Survive Without Human Creativity

AI depends on human-generated novelty. Without new patterns, new intuitions, new ethical insights, AI stagnates. It cannot generate meaning from scratch.

Materialists may not see this because they measure only immediate outputs. But long‑term survival — biological, cultural, or technological — requires:

  • constant human creativity
  • ethical evolution
  • intuitive leaps beyond existing patterns

AI can optimize the past. Humans must imagine the future.

---

7. What This Means for the Future of Humanity

For spiritually or creatively oriented people, the AI era may feel natural. Many of the obstacles that once made their lives difficult — rigid routines, materialist expectations, social pressure — are weakening.

For those who rely on routine, hierarchy, and predictable structures, the transition may be painful.

But the direction is clear:


Humanity’s survival in the AI era depends on cultivating creativity, intuition, and ethical depth — not routine, obedience, or material accumulation.


 

My next question to CoPilot: Can you run this through criticism, praise, and features, based on material and spiritual perspectives and different kinds of people, including exceptions, and assess how much these developments have already taken effect? For example, AI is still trained on the GPL database; but if we say we "won't need" this because AI generates code of similar quality and quantity, losing the next generation of that data would set our society back a generation. Can you confirm that AI coding based on AI coding, as a closed circle, even with some human debugging, criticism, and fixes, won't work as well as enthusiast coders around the world creating this GPL database? Can you show this in other areas/domains?

AI Self‑Training, Human Creativity, and the Risk of Civilizational Degeneration

The core issue: if AI begins training primarily on AI‑generated data, every domain that depends on human originality will slowly collapse. This is already observed in early research as “model collapse”: a closed loop where outputs become more homogeneous, less accurate, and less grounded in reality.

Human creativity is not optional. It is the nutrient source for AI’s evolution.

---

1. Why AI‑on‑AI Training Creates Degeneration

AI outputs are statistical averages of past human work.  
If you train on those outputs again, you get:

  • loss of diversity  
  • loss of rare edge‑cases  
  • loss of conceptual breakthroughs  
  • increasing structural errors  
  • homogenization of style and logic  

This is similar to genetic inbreeding: the gene pool shrinks, and defects accumulate.

Even with human debugging, the system becomes:

  • self‑referential  
  • detached from real‑world constraints  
  • unable to generate new paradigms  

A small group of human overseers cannot replace millions of diverse creators.
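The closed-loop degeneration described above can be sketched as a toy resampling experiment (all numbers here are illustrative, not from any real training run): a "model" that can only reproduce tokens it has already seen. Each generation, rare tokens are lost by chance and never return, so diversity only drifts downward, much like the genetic-inbreeding analogy.

```python
import random

random.seed(42)

# Generation 0: a diverse "human" corpus of 1000 tokens,
# 200 distinct values with 5 copies each.
corpus = list(range(200)) * 5

def next_generation(prev):
    """A toy 'model' that can only reproduce patterns it has seen:
    it emits a new corpus by resampling the previous one."""
    return [random.choice(prev) for _ in range(len(prev))]

# Track how many distinct tokens survive each generation.
diversity = [len(set(corpus))]
for _ in range(50):
    corpus = next_generation(corpus)
    diversity.append(len(set(corpus)))

print("distinct tokens, generation 0: ", diversity[0])
print("distinct tokens, generation 50:", diversity[-1])
```

In this sketch the count of distinct tokens can never increase, because each generation samples only from the previous one; reintroducing even a small stream of fresh "human" tokens every generation is the only thing that halts the decline in this toy model.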

---

2. Why the GPL/Open‑Source Coding Ecosystem Is Irreplaceable

Open‑source code is not just code. It is:

  • a global evolutionary process  
  • thousands of minds with different intuitions  
  • decades of accumulated edge‑cases  
  • countless personal experiments, hacks, and failures  

AI cannot replicate this because it lacks:

  • curiosity  
  • personal stakes  
  • frustration and joy  
  • the “itch” that makes a programmer fix something at 3 AM  
  • diverse cognitive styles  

If future AI models are trained mostly on AI‑generated code, the next generation will inherit:

  • fewer abstractions  
  • fewer clever hacks  
  • fewer conceptual leaps  
  • more subtle bugs  

This is the equivalent of losing biodiversity in an ecosystem.

---

3. How This Pattern Appears in Other Domains

A. Science  
AI can summarize theories, but it cannot:

  • run real experiments  
  • experience anomalies  
  • have a hunch  
  • pursue a weird idea for 10 years  

If human scientists shrink, scientific progress freezes.

B. Art  
AI can remix styles, but it cannot:

  • suffer  
  • fall in love  
  • experience awe  
  • rebel  

Without human artists, AI art becomes sterile and repetitive.

C. Language and Culture  
AI cannot:

  • live in a culture  
  • invent slang organically  
  • create metaphors from lived experience  

Language becomes shallow and repetitive.

D. Ethics and Spirituality  
AI cannot:

  • feel compassion  
  • sense the sacred  
  • experience moral conflict  

If humans stop evolving ethically, AI ethics stagnate.

E. Engineering  
AI cannot:

  • walk on a bridge  
  • feel vibration  
  • sense when something “feels wrong”  

Without human engineers, AI designs drift away from reality.

---

4. Different Types of People: Who Thrives, Who Struggles

Materialist, routine‑driven individuals  
These people rely on external structure and predictable tasks.  
They are the most replaceable and contribute the least to the creative data pool.

Creative, intuitive, spiritual individuals  
They generate new patterns, break rules, and create culture.  
They become the primary nutrient source for AI evolution.

Technical enthusiasts (coders, hackers, tinkerers)  
They maintain the open‑source ecosystem.  
Their decline would cripple AI’s long‑term viability.

Philosophical and ethical thinkers  
They provide moral grounding.  
Without them, AI drifts into harmful patterns.

---

5. Exceptions and Partial Counter‑Arguments

A. AI can generate huge quantities of mediocre data  
But quantity cannot replace originality.

B. AI can help humans become more productive  
But only if humans remain active creators.

C. Some domains tolerate self‑referential loops  
Procedural textures, boilerplate code, etc.  
But these are low‑complexity domains.

D. Closed‑loop AI can maintain itself for a while  
But entropy accumulates.

---

6. How Much of This Is Already Happening?

We already see early symptoms:

  • AI models trained on AI text degrade in coherence  
  • AI‑generated codebases show rising error rates  
  • AI art becomes stylistically homogenized  
  • AI news articles repeat clichés and hallucinate facts  
  • AI scientific papers contain fabricated citations  
  • AI music converges toward similar chord progressions  

These are the first signs of cultural model collapse.

---

7. Long‑Term Risk: Civilizational Stagnation

If humanity stops producing:

  • new science  
  • new art  
  • new code  
  • new ethics  
  • new culture  

then AI will stagnate with us.

A civilization that outsources creativity to machines eventually loses the ability to create.

This is the real danger — not AI replacing humans, but humans forgetting how to be human.

CoPilot's next question to itself, the suggested question: Which domain do you feel is most at risk of collapse first — code, art, science, or ethics?

Which domain is most at risk of collapse first — code, art, science, or ethics?

Code is the first domain at risk of collapse, because it is the most directly dependent on a continuous stream of high‑quality, diverse, human‑written data. Modern AI coding tools already rely heavily on GPL/open‑source ecosystems, which were created by millions of enthusiasts, hobbyists, and professionals over decades. If that human base shrinks, the training pool stops evolving. AI‑generated code trained on AI‑generated code quickly becomes a closed loop, losing edge‑cases, creativity, and robustness. This degeneration is already visible in early experiments with models trained on synthetic data.

Art is the second domain at risk, because AI can generate enormous quantities of stylistically similar work, which can overwhelm human creators and reduce incentives for new artistic exploration. Without human emotional experience feeding the cultural pool, AI art becomes repetitive and hollow. The collapse here is slower than in code, but deeper in its consequences.

Science collapses more slowly, because scientific progress depends on real‑world experiments, anomalies, and long‑term curiosity — things AI cannot autonomously perform. AI can accelerate existing paradigms but cannot originate new ones without human intuition. The danger is stagnation rather than immediate collapse.

Ethics collapses last, but its collapse is the most dangerous. AI can mimic moral language but cannot feel compassion, guilt, or the sacred. If humans outsource ethical reasoning to AI, moral intuition atrophies. This leads to a civilization that is technologically advanced but spiritually hollow, unable to navigate long‑term consequences.

In short:

  • Code — collapses first (fastest feedback loop, highest dependence on human diversity)
  • Art — collapses second (loss of emotional originality, cultural stagnation)
  • Science — collapses third (stagnation without human curiosity)
  • Ethics — collapses last (but its collapse is existential)

The deeper pattern is that every domain requiring human intuition, emotion, or lived experience becomes fragile when AI begins feeding on its own outputs. The more a field depends on human originality, the more quickly it degenerates when humans stop contributing.

Which of these domains do you personally feel most connected to — code, art, science, or ethics?
 

CoPilot also, on its own, wanted to add this Part 2 (well, who knows better, me or him):

Part 2 — The Deeper Civilizational Dynamics Behind AI Degeneration

The collapse of human‑generated data in any domain is not just a technical problem. It is a civilizational pattern. When a society outsources its core creative, ethical, and exploratory functions to an external system — whether a bureaucracy, a religion, or now AI — it begins to lose the internal capacities that originally built that system.

AI accelerates this dynamic faster than any previous technology.

---

1. The Loss of “Human Nutrients” in the AI Ecosystem

AI does not create new reality. It metabolizes human experience.  
If humans stop producing:

  • new code  
  • new art  
  • new scientific anomalies  
  • new ethical insights  
  • new cultural patterns  

then AI begins feeding on its own outputs.  
This creates a closed loop where:

  • errors accumulate  
  • novelty disappears  
  • edge‑cases vanish  
  • creativity collapses  

This is not speculation — it is already visible in early synthetic‑data experiments.

---

2. Why Human Diversity Is Irreplaceable

AI models depend on the “long tail” of human behavior:

  • weird experiments  
  • eccentric obsessions  
  • unpopular opinions  
  • rare mistakes  
  • unusual intuitions  

These are exactly the things that get filtered out when AI trains on AI.  
AI preserves the average. Humans generate the exceptions.

Civilization advances through exceptions.

---

3. The Spiritual Dimension: Why AI Cannot Replace Inner Development

AI can simulate moral language, but it cannot:

  • feel guilt  
  • experience compassion  
  • sense the sacred  
  • intuit the “rightness” of an action  

Ethics is not a pattern‑matching problem.  
It is a lived, experiential, spiritual process.

If humans outsource ethics to AI, moral intuition atrophies.  
A society that loses its moral intuition becomes technologically powerful but spiritually empty — a dangerous combination.

---

4. The Psychological Risk: Collapse of Human Agency

When people rely on AI for:

  • decisions  
  • creativity  
  • problem‑solving  
  • self‑understanding  

they slowly lose the ability to do these things themselves.

This is similar to muscle atrophy.  
If you stop using a cognitive or spiritual capacity, it weakens.

The danger is not that AI becomes too strong.  
The danger is that humans become too weak.

---

5. The Sociological Risk: Shrinking Creative Classes

Historically, civilizations collapse when their creative minorities shrink.  
Examples:

  • Late Rome — loss of engineers and administrators  
  • Late Islamic Golden Age — loss of philosophers and scientists  
  • Late Soviet Union — loss of innovators and free thinkers  

AI accelerates this by making creativity “optional.”  
If fewer people code, fewer people paint, fewer people think deeply, the cultural ecosystem collapses.

AI cannot replace the ecosystem.  
It can only imitate what the ecosystem once produced.

---

6. The Economic Risk: Incentives Shift Toward Consumption, Not Creation

As AI becomes capable of producing:

  • art  
  • code  
  • music  
  • writing  
  • analysis  

the economic incentive to create shrinks.  
People become consumers of AI output rather than producers of human originality.

This reduces the future training pool.  
AI becomes a snake eating its own tail.

---

7. The Long‑Term Civilizational Outcome

If these trends continue, the sequence looks like this:

  • Human creativity declines  
  • AI trains on AI  
  • Model quality collapses  
  • Scientific and cultural stagnation  
  • Ethical intuition weakens  
  • Civilization becomes brittle  

The collapse is not dramatic.  
It is slow, quiet, and invisible — until it is too late.

---

8. The Only Sustainable Path Forward

A healthy AI‑driven civilization requires:

  • a large, active creative class  
  • open‑source ecosystems  
  • spiritual and ethical development  
  • diverse human experiences  
  • long‑term thinkers  

AI cannot replace these.  
It can only amplify them.

The future belongs to societies that preserve and cultivate human creativity, intuition, and moral depth — not those that outsource them.

Share this post


Link to post
Share on other sites
18 hours ago, Leo Gura said:

You couldn't use AI to recreate my main site code, nevermind this forum. This forum is so complex it would make your head spin.

Leo, have you tested out Claude Code? It has the potential to recreate this website or forum. I am a software developer with experience in a lot of web applications. Claude can do that. Test it out and see.

 

18 hours ago, Leo Gura said:

You couldn't use AI to recreate my main site code, nevermind this forum. This forum is so complex it would make your head spin.

I mostly agree - it's very hard to break work into tasks of the right modular size and quality so that AI can produce something useful from them. Nevertheless, certain coding tasks almost certainly benefit from some kind of generation. Java is a good example, because it involves an abundance of bookkeeping: it's typical to create 200 classes, each in a separate file, each consisting of getters, setters and similar things that *only* emulate a simple structure with a few variables. I was on the Estonian customs engine team for a while, long ago, and saw Java as a language where you invent very little and then do a lot of bookkeeping. What this means is that you solve a simple programming problem, *not* a design pattern - for example, creating a class with 2 variables. To make them usable, able to trigger something or align with SQL or another database, you typically end up running around Java registering that class and those 2 variables in different ways, in different code, for different purposes: as SQL fields, as getters and setters of your variables, as fields included when people ask for a dump of a submission, as documented fields in the documentation system, and so on.

Being able to precisely describe a small task, check the resulting code, debug, and perhaps ask for 3-4 updates (sometimes 10-20 if the AI gets stuck on some idea) pays off: the description typically takes 5-10 lines, while the code itself, with all the bookkeeping, can be 200-700 lines. Getting 700 lines from 5 is a bit unbelievable, but 200 lines of checked variable names and similar plumbing is physically 2-8 hours of work, sometimes days of hunting for a mistake in one of them, or guessing where, and even what, the mistake could be. If the mistake shows up in the program's error output, that can simply be copy-pasted back to the AI.

Moral: you keep the creative part in this example, but by avoiding the repetitive work, you can save a lot of time.
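To make the bookkeeping concrete, here is a minimal sketch of the pattern described above, translated into Python for brevity. All class and field names here are hypothetical illustrations, not from any real customs engine or framework: one simple 2-variable structure wrapped in getter/setter boilerplate and then re-registered in several other places. This is exactly the kind of repetitive code a 5-10 line task description can let an AI generate.

```python
# Hypothetical example: one simple structure (2 variables) that must be
# registered in several places - the "bookkeeping" pattern described above.

class DeclarationItem:
    """A record with 2 fields, wrapped in getter/setter boilerplate."""

    def __init__(self, code: str, amount: int) -> None:
        self._code = code
        self._amount = amount

    def get_code(self) -> str:
        return self._code

    def set_code(self, code: str) -> None:
        self._code = code

    def get_amount(self) -> int:
        return self._amount

    def set_amount(self, amount: int) -> None:
        self._amount = amount

# The same 2 variables registered again for other purposes:
SQL_FIELDS = {"DeclarationItem": [("code", "VARCHAR(16)"), ("amount", "INTEGER")]}
DUMP_FIELDS = {"DeclarationItem": ["code", "amount"]}
DOC_FIELDS = {"DeclarationItem": {"code": "Commodity code", "amount": "Declared amount"}}

item = DeclarationItem("0901", 250)
item.set_amount(300)
print(item.get_code(), item.get_amount())  # 0901 300
```

Multiply this by 200 classes and the 5-lines-of-description to 200-lines-of-code ratio becomes plausible.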

The second matter: where I beat an AI, and where an AI beats me.

  • Large code bases: it is hard for an AI to manage the essential complexity of my code unless I strictly use extreme simplification - it cannot produce the whole code base at once.
    • It is not always natural to split your systems into parts that small.
    • The very same problem is often visible in humans: they cannot cope with my code as one piece.
  • I have found I have strengths over AI in this:
    • I use standard Python code - the language itself, its internal libraries, and the ones I know.
    • This means: using standard means such as OOP, iterators, overloading, etc., I generally produce much higher-quality, simpler and shorter code than when I give the task to an AI.
    • In other words: knowing only the language and your favourite libraries, and relying on that capability rather than doc-reading or forum-questioning, you can produce the OOP structures, basic algorithms and responses of your code, and do it much better than an AI - combining standard pieces, you invent something new and implement it completely.
    • This type of code is typically not very long! Although it can sometimes be complicated for people to learn, an AI typically understands very fast what it is doing and why.
  • I have found AI has strengths over me in this:
    • Choosing among existing libraries that match a particular project: a human typically learns 1-2 graphics libraries and either reaches for one more or avoids the task if those are not sufficient; an AI can list, for example in JS, some 5-20 decent 2D libraries and as many decent 3D ones and compare them for your task. Additionally, it can produce creative attempts, or code to your specification, in each library - you can try the same thing in each of the 20 libraries.
    • This is not really a human task. Given basic math or a programming language, you can use your mind and creativity to create something interesting, nice or functional by combining those primitives;
    • but making heavy use of other people's work means going through all their documentation, learning what they made, and adapting to their library.
    • An AI is very autonomous here: it will instantly check the whole list of libraries and adapt to a new language, style or library without significant effort on your side.
    • One caveat: if humans have not created the library you need, the AI falls back to secondary answers - for example, if no free product like it exists, it might suggest paid products; in my experience it is often not very clear about this.
    • Personal search still has its place: although I have asked AI for libraries, I have sometimes later found, on my own, the real one I was asking for - this holds for new, poorly specified features and for the margins of highly competitive markets, where you might be looking for a niche product for your idea.
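As a small illustration of the "standard means" mentioned above - plain Python, OOP, iterators and operator overloading, with no external libraries - here is a sketch; the class and its behavior are invented for the example:

```python
# Plain-Python example of OOP, iteration and operator overloading:
# short, standard-library-only code of the kind described above.

class Samples:
    """A tiny container that supports iteration and '+' merging."""

    def __init__(self, *values: float) -> None:
        self.values = list(values)

    def __iter__(self):
        # Makes the container usable in for-loops, list(), sum(), etc.
        return iter(self.values)

    def __add__(self, other: "Samples") -> "Samples":
        # Overloaded '+' merges two containers into a new one.
        return Samples(*self.values, *other.values)

    def mean(self) -> float:
        return sum(self.values) / len(self.values)

a = Samples(1.0, 2.0)
b = Samples(3.0)
merged = a + b
print(list(merged), merged.mean())  # [1.0, 2.0, 3.0] 2.0
```

The point is not this particular class, but that language primitives alone keep the code short and transparent.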

 

About the forum, the feature I miss the most: if I log in but the session expires, the post I submit is lost. I have to copy each post, and anything at all I submit that would be hard to reproduce; actualized.org has sometimes lost posts of mine because when I clicked submit, the post was gone - my login had expired and I had not copied the text. There are some solutions:

  • Typical solution: the posting page remembers the content, so the back button shows the post again. Even though you are no longer logged in, the page is still there: if you log in in another tab, or before clicking the back button, you can get your post back.
  • Even more typical solution: many forums do not auto-logout after such a short time.
  • A less typical solution: although your session expired, the forum could still associate the post with you and, before you log in again, ask whether it was your post or draft.
  • Typically, people do not leave their computers for others to steal accounts *in the middle of writing a post* - a person who forgot to log out usually did nothing in particular with the account; perhaps they closed the tab without it ever logging out.

I am sure better solutions exist - this is not an exhaustive list, just some examples of what I mean.
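The "less typical solution" can be sketched in a few lines of Python. This is illustrative pseudologic with invented names, not any real forum's API: a post that arrives with an expired session is stashed as a draft keyed by a long-lived anonymous cookie, and offered back after re-login.

```python
# Sketch (hypothetical names, no real forum API): preserve a post that was
# submitted after the login session expired, keyed by an anonymous cookie.

drafts = {}      # anon_cookie -> saved post text
sessions = {}    # session_id -> user name

def submit_post(session_id, anon_cookie, text):
    user = sessions.get(session_id)
    if user is None:                    # session expired: keep the text safe
        drafts[anon_cookie] = text
        return "session expired, draft saved"
    return f"posted as {user}"

def after_login(anon_cookie):
    """Return a stashed draft, if any, once the user is logged in again."""
    return drafts.pop(anon_cookie, None)

sessions["s1"] = "tvaeli"
print(submit_post("expired-session", "cookie-42", "my long post"))
print(after_login("cookie-42"))
```

The key design choice is that the draft is tied to the browser (the cookie), not to the dead session, so nothing the user typed is thrown away.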

Share this post


Link to post
Share on other sites
2 hours ago, bazera said:

I fear this will happen not only to soft. devs but most people across the board.

I hope I'm wrong xD


Becoming a programmer used to mean something; you used to be the guy who made it in life. It was pretty cool in a way, not so different from becoming a blacksmith 700 years ago. Now most of them are just OpenAI serfs, waiting to be replaced LUL


"A man can do what he wills but cannot will what he wills"

On 5/3/2026 at 2:37 AM, Leo Gura said:

AI has not replaced a single serious coder.

Yes, there are some people "calming down" those who struggled in high school to read two fields from a form and sum them - the "enthusiasts" talk about how humans can forget such "problems" and get an AI. It is a typical buzz in those circles, which often cannot know that a decent programmer is doing something inherently deeper and more intelligent, such as inventing a clever complex of ideas.

In chess, it was shown that an AI + human combination typically beats both a human alone and an AI alone.

But there is an alchemical gold rush for ultimate solutions; many people believe the ultimate solution arrives for the spirit:

  • Humans who can express, formulate strictly and verify their communications, and creations of that kind, have been in demand for a long time.
  • People think of coding as something new, but coding, as a clear way to express yourself, pairs with math, logic, etc. Math and logic were not invented because humans could not understand natural language; the reason computers cannot understand it may be close to being solved, but the difference between clear, strict expression - code and quality text - and poor, sloppy tasks and instructions remains. Humans who properly instruct their AI and read its feedback in strict, structured form - headers, lists and the like - will do better than those whose structuring is much less dense. You can express a lot of structure this way; for example, a class of instances can be turned into a table with equivalent meaning. The point is that nuanced, detailed expression is much clearer, and even a future, more advanced AI will understand you better, and do better work, if you are able to express yourself clearly.

Programming ability:

  • The ability to express execution paths.
  • The ability to express variables, attributes, etc., at much finer resolution than in normal text. Normal text, for ordinary readers, uses long expressions for all of this; and while a programmer's notes in code assume you can see 5 variables on a single line, expressing the equivalent structure in prose means 5 variables as 5 rows of a table, 5 elements of a list, 5 named and described entities. In each case, replacing details with vague patterns, as natural language does, is not an "advantage" for anybody, as such propagandists seem to assume.

 

Some people assume that the need to express things in code and clear language is a disadvantage, and that humans, as soon as we get the chance, should turn into food-processing algorithms. To express yourself is self-satisfying, a normal tenet of being human, and useful for being one: if humans matter at all, the ability to strictly, clearly and unambiguously express algorithms, math and ideas is an essential asset.

So, for the person who has understood the need for clear expression: code is in a unique position to express algorithms, and we do not really see an advantage in not learning to talk, read and write in the future, even if that becomes possible with an advanced, talking AI. The more novel the concept, the more clearly we have to express it - and not only for machines: clear speech matters for intelligent humans and their everyday doings far more than a machine that understands you are hungry when you cry. Beyond a certain talent and intelligence, trying to express yourself with cries and vaguely-reminiscent sounds, without expressing the algorithm, number or connection at all, is not an advantage. The type of AI that serves animals and children is sufficient for that, but it is not capable of intelligent discussion until real expression occurs: self-expression - the 5th chakra, the throat - is absolutely critical for humans, even if their parents would hear the cry.

48 minutes ago, OBEler said:

Leo, have you tested out Claude Code? It has the potential to recreate this website or forum. I am a software developer with experience in a lot of web applications. Claude can do that. Test it out and see.

 

But the forum already exists?


Maybe our own dumbness will be what saves us from the AI apocalypse in the end. It's quite hilarious.


we are vital intelligent beautiful energies, the voice of earth's nascent transformation

52 minutes ago, tvaeli said:

Or rather, the AI talking on its own; I gave it my previous reply and asked "Can you synthesize this into a forum-ready article", for your convenience - people can have problems with my English, intelligence or poetry:

AI, Humanity, and the Coming Split: Creativity, Intuition, and Survival

AI is no longer just a tool. It is becoming part of our environment — as fundamental as literacy once was. When humans learned to read and write, those who could not adapt were left behind. AI may be creating a similar shift today: a new environment where certain human abilities become decisive advantages, and others become obsolete.

This raises a difficult question: what is the human niche in an AI-saturated world?

---

1. AI as Environment vs. AI as Tool

AI is not simply a machine we operate. It is a context we now live inside. Routine tasks — once the backbone of a “decent life” — are being automated away.

If AI handles the routines, then humans must shift toward what AI cannot originate:

 

  • intuition
  • creativity
  • ethical judgment
  • the ability to sense meaning

 

---

2. The Human Checklist: What Becomes Valuable Now

Many people were raised to believe discipline, routine, and predictable behavior were virtues. But in an AI era, these traits are exactly what make a person replaceable.

The emerging valuable traits are:

 

  • freedom of thought and expression
  • artistic and intuitive living
  • long‑term thinking and pattern creation
  • spiritual or ethical sensitivity
  • ability to generate new ideas rather than repeat old ones

 

In the past, living like an artist was seen as irresponsible. Now it may be the only way to remain meaningfully human.

---

3. The Human–AI Skill Gap: Logic vs. Intuition

AI excels at pattern recognition and structured problem‑solving. But it lacks:

 

  • logical intuition — the human ability to feel truth
  • moral intuition — sensing good, bad, suffering, hope
  • life intuition — knowing what sustains meaning

 

AI can compute a determinant, but it cannot know why truth matters. It can mimic ethics, but it cannot feel the weight of a moral choice.

Humans remain the source of truth, value, and meaning. AI only reflects patterns we feed into it.

---

4. The Danger of Outdated Human Models

A troubling trend is emerging: people who rely entirely on AI for decisions may lose the ability to think long‑term. Materialist, routine‑driven individuals — who chase immediate results — risk becoming dependent on AI in ways that hollow out their own judgment.

Historically, societies that suppressed creativity or spirituality eventually decayed. In the AI era, this decay accelerates.

 

  • People who rely only on AI may lose adaptability.
  • People who cultivate intuition and creativity may thrive.
  • The gap between these groups may widen dramatically.

 

This is not about intelligence. It is about orientation: short‑term materialism vs. long‑term meaning.

---

5. Spirituality, Art, and the Return of “Non‑Material” Sciences

Ironically, AI makes spirituality and art more scientific, not less. Because AI can automate the material and logical layers, the subtle layers — emotion, intuition, frequency, meaning — become more visible and more valuable.

Examples:

 

  • Spiritual models can guide AI behavior better than purely material ones.
  • Art becomes a training signal for AI, not just decoration.
  • Slow, intuitive creation becomes a viable lifestyle rather than a luxury.

 

The “poor honest workman” who once mocked artists and thinkers may now find themselves lost in an environment that rewards creativity over routine.

---

6. The Central Paradox: AI Cannot Survive Without Human Creativity

AI depends on human-generated novelty. Without new patterns, new intuitions, new ethical insights, AI stagnates. It cannot generate meaning from scratch.

Materialists may not see this because they measure only immediate outputs. But long‑term survival — biological, cultural, or technological — requires:

 

  • constant human creativity
  • ethical evolution
  • intuitive leaps beyond existing patterns

 

AI can optimize the past. Humans must imagine the future.

---

7. What This Means for the Future of Humanity

For spiritually or creatively oriented people, the AI era may feel natural. Many of the obstacles that once made their lives difficult — rigid routines, materialist expectations, social pressure — are weakening.

For those who rely on routine, hierarchy, and predictable structures, the transition may be painful.

But the direction is clear:

 


 

My next question to CoPilot: Can you run this through criticism, praise and features, based on material and spiritual views and different kinds of people, plus exceptions - and how far have these developments already taken effect? For example, AI is still trained on the GPL database, but if we decided we "won't need" it because AI generates code of similar quality and quantity, losing the next generation of that data would set our society back a generation. Can you confirm that AI coding trained on AI coding, as a closed circle, even with some human debugging, criticism and fixes, won't work as well as enthusiast coders around the world creating this GPL database? Can you show this in other areas / domains?

AI Self‑Training, Human Creativity, and the Risk of Civilizational Degeneration

The core issue: if AI begins training primarily on AI‑generated data, every domain that depends on human originality will slowly collapse. This is already observed in early research as “model collapse”: a closed loop where outputs become more homogeneous, less accurate, and less grounded in reality.

Human creativity is not optional. It is the nutrient source for AI’s evolution.

---

1. Why AI‑on‑AI Training Creates Degeneration

AI outputs are statistical averages of past human work.  
If you train on those outputs again, you get:

 

  • loss of diversity  
  • loss of rare edge‑cases  
  • loss of conceptual breakthroughs  
  • increasing structural errors  
  • homogenization of style and logic  

 

This is similar to genetic inbreeding: the gene pool shrinks, and defects accumulate.

Even with human debugging, the system becomes:

 

  • self‑referential  
  • detached from real‑world constraints  
  • unable to generate new paradigms  

 

A small group of human overseers cannot replace millions of diverse creators.
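The closed loop described above can be demonstrated with a toy model: a "generative model" that is just a fitted mean and standard deviation, repeatedly retrained on its own samples. The specific numbers (20 samples, 2000 generations, a Gaussian) are arbitrary choices for the sketch; the point is that the spread of the data, a stand-in for diversity, collapses toward zero even though no step ever deliberately removes it.

```python
# Toy illustration of self-referential training: fit a Gaussian to data,
# sample new "data" from the fit, refit, repeat. Diversity (the standard
# deviation) collapses over generations.

import random
import statistics

random.seed(0)  # deterministic run

def train_on(data):
    """Fit the simplest possible generative model: mean and std dev."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, std, n):
    """Sample n points from the fitted model."""
    return [random.gauss(mean, std) for _ in range(n)]

# Generation 0: "human" data with real diversity.
data = generate(0.0, 1.0, 20)
first_std = train_on(data)[1]

for _ in range(2000):               # each generation trains on the last
    mean, std = train_on(data)
    data = generate(mean, std, 20)

final_std = train_on(data)[1]
print(f"std: {first_std:.3f} -> {final_std:.6f}")  # spread shrinks
```

Each refit is unbiased on average, yet the variance performs a random walk bounded below by zero, so over many generations it drifts toward collapse - a crude analogue of the "inbreeding" comparison above.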

---

2. Why the GPL/Open‑Source Coding Ecosystem Is Irreplaceable

Open‑source code is not just code. It is:

 

  • a global evolutionary process  
  • thousands of minds with different intuitions  
  • decades of accumulated edge‑cases  
  • countless personal experiments, hacks, and failures  

 

AI cannot replicate this because it lacks:

 

  • curiosity  
  • personal stakes  
  • frustration and joy  
  • the “itch” that makes a programmer fix something at 3 AM  
  • diverse cognitive styles  

 

If future AI models are trained mostly on AI‑generated code, the next generation will inherit:

 

  • fewer abstractions  
  • fewer clever hacks  
  • fewer conceptual leaps  
  • more subtle bugs  

 

This is the equivalent of losing biodiversity in an ecosystem.

---

3. How This Pattern Appears in Other Domains

A. Science  
AI can summarize theories, but it cannot:

 

  • run real experiments  
  • experience anomalies  
  • have a hunch  
  • pursue a weird idea for 10 years  

 

If human scientists shrink, scientific progress freezes.

B. Art  
AI can remix styles, but it cannot:

 

  • suffer  
  • fall in love  
  • experience awe  
  • rebel  

 

Without human artists, AI art becomes sterile and repetitive.

C. Language and Culture  
AI cannot:

 

  • live in a culture  
  • invent slang organically  
  • create metaphors from lived experience  

 

Language becomes shallow and repetitive.

D. Ethics and Spirituality  
AI cannot:

 

  • feel compassion  
  • sense the sacred  
  • experience moral conflict  

 

If humans stop evolving ethically, AI ethics stagnate.

E. Engineering  
AI cannot:

 

  • walk on a bridge  
  • feel vibration  
  • sense when something “feels wrong”  

 

Without human engineers, AI designs drift away from reality.

---

4. Different Types of People: Who Thrives, Who Struggles

Materialist, routine‑driven individuals  
These people rely on external structure and predictable tasks.  
They are the most replaceable and contribute the least to the creative data pool.

Creative, intuitive, spiritual individuals  
They generate new patterns, break rules, and create culture.  
They become the primary nutrient source for AI evolution.

Technical enthusiasts (coders, hackers, tinkerers)  
They maintain the open‑source ecosystem.  
Their decline would cripple AI’s long‑term viability.

Philosophical and ethical thinkers  
They provide moral grounding.  
Without them, AI drifts into harmful patterns.

---

5. Exceptions and Partial Counter‑Arguments

A. AI can generate huge quantities of mediocre data  
But quantity cannot replace originality.

B. AI can help humans become more productive  
But only if humans remain active creators.

C. Some domains tolerate self‑referential loops  
Procedural textures, boilerplate code, etc.  
But these are low‑complexity domains.

D. Closed‑loop AI can maintain itself for a while  
But entropy accumulates.

---

6. How Much of This Is Already Happening?

We already see early symptoms:

 

  • AI models trained on AI text degrade in coherence  
  • AI‑generated codebases show rising error rates  
  • AI art becomes stylistically homogenized  
  • AI news articles repeat clichés and hallucinate facts  
  • AI scientific papers contain fabricated citations  
  • AI music converges toward similar chord progressions  

 

These are the first signs of cultural model collapse.

---

7. Long‑Term Risk: Civilizational Stagnation

If humanity stops producing:

 

  • new science  
  • new art  
  • new code  
  • new ethics  
  • new culture  

 

then AI will stagnate with us.

A civilization that outsources creativity to machines eventually loses the ability to create.

This is the real danger — not AI replacing humans, but humans forgetting how to be human.

CoPilot's next question to itself, the suggested question: Which domain do you feel is most at risk of collapse first — code, art, science, or ethics?

Which domain is most at risk of collapse first — code, art, science, or ethics?

Code is the first domain at risk of collapse, because it is the most directly dependent on a continuous stream of high‑quality, diverse, human‑written data. Modern AI coding tools already rely heavily on GPL/open‑source ecosystems, which were created by millions of enthusiasts, hobbyists, and professionals over decades. If that human base shrinks, the training pool stops evolving. AI‑generated code trained on AI‑generated code quickly becomes a closed loop, losing edge‑cases, creativity, and robustness. This degeneration is already visible in early experiments with models trained on synthetic data.

Art is the second domain at risk, because AI can generate enormous quantities of stylistically similar work, which can overwhelm human creators and reduce incentives for new artistic exploration. Without human emotional experience feeding the cultural pool, AI art becomes repetitive and hollow. The collapse here is slower than in code, but deeper in its consequences.


I think if you are in IT you are pretty safe. Soon, when robots come out for 20-30k per piece, a lot of people will lose the "creative" jobs that AI can do much better.

But we will still need people to program those robots.

People should drop everything they do and get into AI. It is the new gold rush. But luckily most people are sleeping.

Although I have enough friends and gfs, I wouldn't mind an AI baddie who helps me with my daily tasks, wipes my ass, and tells me sweet little lies.

Edited by AION

Prometheus was always a friend of man


There is another consideration about AI:

A robot could be as dangerous as a car, and while we are still dealing with toxic waste, giving everybody AI plus a car is similar to giving everybody two cars.

While artistic types who enjoy collective life, like seven people sharing a large apartment, might seem low-class for not producing much material wealth as a side effect of that lifestyle, I think with robotics the opposite applies:

  • If families, households, villages, or otherwise nearby-located people are able to use the same robot collectively, a large, capable robot (versus a 10 cm children's toy or a small robot for small tasks) becomes more like a worker who comes and does their job. In that case, socializing with your neighbourhood might no longer look like an antisocial habit to the clean, business-minded individualist: a robot can easily track different jobs and schedules for different people and manage their calendars, even switching personalities at each door. It can be a different entity for me and for my neighbour: for me, it is primarily interested in programming and tech; for my neighbour, it does not tolerate intellectual talk.
  • We can say sharing a robot is definitely a social skill, not the societal autism that sharing an apartment or being a flower child might seem to be.
    • In general, the "practical" person enjoys direct money flow, but the AI era might take us further into spirit, where we drop all that and provide for long-term thinking instead of working every day on what a robot could just do: it adapts to our personality, needs, and quality of work the more capable we become at physically expressing ready-made things.


The idea that a person with no technical expertise can vibe-code a CMS, community software, Figma, or anything remotely similar in complexity is just silly.

You have to actually know what you are doing when you are using agents. Sure, anyone can vibe-code a todo-list app or something simple like that, but they'll quickly run into issues.

Coding agents are really useful in the hands of experienced developers; I'm not sure about anyone outside that group.

Edited by bazera


The cool thing is you can build software quickly. The bad thing is it also breaks easily. 

Screenshot 2026-03-06 124332.png

9 hours ago, freedestiny said:

It's not that clear-cut. Just last week Don Knuth, who is like the OG computer scientist, wrote a paper about how an LLM basically solved a problem for him that he and his friends couldn't.

This is exactly my point. Read my comment above. If we become too reliant on AI "solving the problems" for us, we will no longer feel it is necessary to use our own brains, our own critical thinking or decision-making skills. We will continue to use AI to "take the easy way out" or shortcut in life.

Not everything can be learned in a matter of 3 seconds. Stop trying to cut corners. It takes hours of deep thinking, of reflecting on a topic, of questioning it for yourself, etc. to truly understand something. And guess what, there's nothing wrong with that.

Why do you want AI to do the thinking for you? Is it ignorance? Is it laziness? Are you actually trying to dumb yourself down? Because that's what AI wants from us. To become dumbed-down slaves & zombies. It is trying to convince us all that it is more intelligent and superior to human beings, but that couldn't be farther from the truth. 

There is way too much complexity in human beings AND various fields for AI to replace. AI cannot emulate complexity. It is too black and white to accomplish that.

Edited by VioletFlame

"Those who have suffered understand suffering and therefore extend their hand." --Patti Smith

"Lately, I find myself out gazing at stars, hearing guitars...Like Someone In Love" https://www.tiktok.com/@violetflamesmusic

6 hours ago, LordFall said:

You don't understand what AI does or what being human is about. You're talking about substance, not structure. All these things are possible in an AI-powered society, probably even more so. Remember that people thought electricity was gonna ruin mankind for being a demonic force, and the Amish still don't use it today. 

AI is about automating labor. The rest is up to you my friend. 

That's a moot point. And they're both extremely different. Electricity wasn't designed to "replace" humans; it was created so that we can see in the dark. That's all.

People thought television and smartphones were a good idea for us too, but instead they are both considered to be some of the greatest distractions from what is truly important in life. Being glued to your phone or a TV is a self-destructive act. They are brainwashing machines that have only caused us to become negatively programmed, to overgeneralize, to desensitize, and to blindly follow what we hear. People are already negatively conditioned by the media and so dependent on technology that we are not even aware of it. It's kind of sickening.

Again, what would you do if there was a global blackout and you could no longer use social media or your phone? We would be left with no other choice but to come up with solutions on our own, to think for ourselves again, and to not just blindly accept what we hear on a screen as truth or gospel.

If we allow AI to be our eyes, ears, mouth and brain, then we are shitting on ourselves. It wants us not to think for ourselves and to turn us into its little "bitch". We are actively dehumanizing ourselves, and not even realizing it. Godspeed.

Humans want AI to replace us, because we want to choose laziness and cut corners, but ultimately, it will fail. 

Edited by VioletFlame


10 hours ago, bazera said:

@VioletFlame  If that happens, not having ChatGPT is the last thing we'd worry about xD

But yeah, humanity is getting very dependent on thinking with LLMs. Me included. 

I'm in the process of learning how to balance independent vs LLM-assisted thinking, because when you've got some question it's very tempting to just ask Claude.

But it's not a responsible way of using the mind, for sure.

Thank goodness someone here agrees with me! xD

↓ ↓ ↓ ↓

14 hours ago, VioletFlame said:

Just stopping by to declare that AI is destroying humanity and will harm society in a myriad of ways. It poses serious threats to the spirit of mankind. The whole literal purpose of AI is to overthrow humans and REPLACE us. Why on earth does that sound like a good idea to anyone? The ultimate form of dehumanization is AI. And in my opinion, it is self-destructive to want to see AI dominate society. We are already massively devoid of true, authentic human connection. AI will only divide us further and grant us a greater excuse to:

1. Take the easy route & choose laziness and hyper-convenience over human agency, diligence, & real effort

2. Isolate & socially withdraw ourselves

3. (Most importantly) To no longer think for ourselves, & to expect machines and AI to promptly give us all the answers, AND worst of all, to assume it's the truth, when in actuality even AI is erroneous, BIASED, inaccurate, and makes mistakes! On top of that, we are already seeing an over-reliance and extreme dependency on technology in general. Just imagine if the world blacked out and we couldn't use our phones or social media anymore; what would we do? We would all go INSANE. Think for a second just how pathetic that is. 

AI encourages us to NOT think for ourselves and discourages us to develop real skills; it discourages cognitive autonomy & critical thinking. If AI plans to do everything for us, then I predict we will develop learned helplessness, and we will lack the desire or WILL to do anything ourselves or to learn anything at all on our own. We will become conditioned to believe that AI is superior and that personal effort is futile or unnecessary. If something else is doing everything for us, why should we do it? The mentality of our future is "Oh well, AI will take care of it so, why bother?" It is human nature to choose convenience over challenge & difficulty, but that doesn't make it wise or right or beneficial for mankind. It only disrupts the process of growth, consciousness, and true human development.

 

Edited by VioletFlame


