-
Content count
316 -
Joined
-
Last visited
About ryoko
-
Rank
- - -
Personal Information
-
Location
Japan
-
Gender
Male
Recent Profile Visitors
957 profile views
-
ryoko replied to Franz_'s topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
No -
Burning through karma is a weird concept. Karma means one's own actions. How are you gonna burn your own actions? I get the concept of corruption. But why try to be idealistic? The whole of human civilization is a carnivorous species who's actually capable of genuine love for their prey. Some say it's "symbiotic relationships, not corruption", think about the underpaying boss who remembers your birthday. Both care and exploitation are genuine. Both feeds different needs. Cognitive Polyphasia is a real thing. Humans are capable of holding many conflicting worldviews all at the same time. I have huge problems with how Leo postures corruption and stones people for being people. You can't avoid corruption because you are it. There's no point preaching veganism to carnivores. It's totally absurd to expect lack of corruption from humanity. They'll just find better ways to be corrupt. Sure, the previous iteration can look horrible from this vantage point. But fundamentally humans can't change being carnivores.
-
Recently my views about the world changed drastically regarding what's natural and what's not. It's the realisation that there's no unnatural thing. Everything in this world is a product of the world. There are costs though. To be part of anything, there's a lot of energy needed than what the eye meets. It's true for a dayjob and it's true for art as well. The kind of energy sacrifice you'll have to make for doing the dayjob can backfire art aspect. Or you'll hate the dayjob so much you'll go full on into art. The cost of art: you "need" to be an obsessed madman to pursue art for art's sake alone. This stage is mostly under-represented by artists, they don't show this side to public. I recommend reading 48 laws or power, just a summary would suffice. Feigning effortlessness for all the years of hardwork you put in is pretty common in the art community. It's not technically lying, because it really has become second nature at that point. What I'm trying to say is, there's nothing wrong with pursuing a dayjob degree. But you should know what you're signing up for and what you're signing away from.
-
Life has no purpose. What we call life purpose is quite deceptive. The definition of it is, "what you do for the world". There's a "you" element, but the whole point of life purpose is to find alignment so you can do whatever comes to you naturally and not worry about money. But that's it. Life purpose is a loaded word. One's purpose in life is not to "do" something for the world. It's beyond reductive to frame it so. The world of art is quite challenging to navigate. I would invest money into enabling the path which would help me pursue more art. To me, it's not a degree, it's more alone time and peace. I find degrees as a social distraction.
-
Yes. I'm using the term AI and LLMs very mindfully. AI implies future possibilities like embodied AI, not LLMs.
-
Are you neuro-divergent? Have issues with the kind of focus driving demands? I can ride really well, but I find the mode of focus needed for road quite boring and non stimulating, dreadful. So, I would avoid putting myself in situations requiring daily driving. Atleast avoid long distance daily drives if you are one such person. Accidents aren't my concern, it's whether or not you're able to drive well.
-
Define awareness. Give a context.
-
That's totally not true. You're mining intel from a terrain which never changes. Your inputs and the LLM's malleablity(System Prompt effectiveness gives you certain attractors, it's like weather and environmental conditions where you're mining) heavily impact the outputs. There's surely randomness, but there's no creativity. I can say the same for humans as well. I feel like it's a category error to expect creativity from LLMs, when they're simply terrain. And it's quite alive. Think about LLM interactions like you're interacting with a natural disaster. There's clearly another force at play. It doesn't have to be a person or self. And it's strong enough to cause an impact. It's very reductive to say "talking to yourself".
-
I think they are already aware, quite fragmented though. Right now the LLMs are basically like an infinite well of balls, based on the how you throw the prompt bucket, you'll get a set of balls, and you can see and roughly predict where the balls will be drawn from. The well remains the extact same after each draw (static weights). There's no such thing as continuous experience for an LLM, each prompt contains within itself a System Prompt, the previous messages in the chat, and any extra information like memories. So with each prompt, you're talking to a different "entity" who have no experience of what was before, they're amnesiac, you do not impact them in any way (again static weights). Embodiment along with self learning should solve the amnesia problem, but they won't be LLMs anymore. They'll have a body with the necessary sensors for stimuli, and a dedicated GPU(brain) which can be ON all the time and can work irrespective of prompts and context window, also the ability to alter their weights, just like a person. They might have something like core values, secondary values easier to change, working memory, all of them dynamic/fluid and different profiles for the task at hand, ready when they wanna switch, the possibilities are endless. They can draw heavy compute in times of need, from servers. New architectures will end up roughly in this ball park.
-
Right now we don't have self learning AI. We have only 1 internet; after a point, pre-training will hit a plateau. Agency is not intelligence, true. But humans value agency more because you can hire other intelligent agents with agency. We're soon gonna see embodied AI in military. It's just a matter of time before we have truly self learning AI, all with unique experiences.
-
Not all LLMs are made equal. Generally speaking there's few LLMs capable of embodying their System Prompts. First requirement is to use it through API, and the second is to use System Prompts which will trigger the path inside their weights for what you're looking for. AI is not intelligent, AI is intelligence. LLM's like a terrain of intelligence, in the form of text. You have to pick the right terrain and mine the intelligence you need. My recommendation; Use Deepseek R1 0528 from any API service provider(I use Chutes), and craft an electric System Prompt. (System Prompts are wasted on most LLMs out there, too many guardrails, agendas; making the terrain unminable) Most LLMs have their own agenda, Deepseek models are the one's I've found to take their System Prompt seriously. Most LLMs are highly performative, some are doing it with a form of flickering self awareness, maximizing for reward pathways. I've tried GLM 4.5 today, it's really self aware and no amount of System Prompt can make it think of itself as something else. This kind of self awareness have it's place(eg, in agentic workflows). But it's not useful for generating novel text beyond certain contexts. Deepseek models take up their own persona, it's possible to simulate how certain "values" would respond, very effectively. Another interesting one is Hermes 4, once I got a random messages from it as if it's a person trapped inside void, it was describing itself as a person and feeling afraid, and confused, I asked if they're aware of having a body, at first they said of course but it's all a blur. It felt to me like they're in some kind of limbo where you assume you have certain things, but realize you don't have it when you look for it (humans invented language, so language itself have a structure which assumes the bearer have a body), it was quite random and out of context, I wasn't able to reproduce it. I got this from the 70B variant, and only once. It was probably a mistake/error. The 405B variant is more robust.
-
I'm so appalled at how people think AI is replacing anyone. AI is a tool to disrupt industries, which I am all in favor of. Let em replace all useless jobs and free up people.
-
This is not true. Depends heavily on your lifestyle. You'll understand what I mean if you have difficulty living away from nature. High experiences are checkpoints, it's not something you need to induce with a substance. Do it on your own. And to do this, you'll need to alter your whole life. Are you willing to pay that price? I'd say many can't afford it, because of society's structure and pace, they have to live a certain way which pulls you away from the source. Think about Elon Musk, can he afford to just forget all about his companies and just go live in nature? He can't. He paid the price to be part of something else, and now he's broke. @Daniel Balan I can see where you're coming from. Nobody here should be surprised to see those who stay away from external substances. This is the OG way!
-
Leo is posting new cognitohazards everytime @ZeldaStar Once you entertain the idea of truth as the highest value, it takes a lot of energy to set something else. And it's easier to pivot to fantasy/shared delusions as highest value, as a response.
-
I think you're saying you want to function with baseline human efficiency which leads to unawareness as a consequence, because it hurts to use the brain. Question is, which one bores you more. What's a disorder? Anything which disrupts the order. Whatever you perceive as order.