zurew

Everything posted by zurew

  1. Even if you manage to grasp what a cup is, that doesn't mean you managed to grasp THE cup in Ralston's hand.
  2. Leo, you are still struggling to grasp what a cup is; this is why you are jealous of Ralston - the enlightened cup-grasping chad.
  3. Consuming hectoliters of DMT-tea was the secret variable to enlightenment all along. Like imagine the dude is constantly drugged outta his mind - drinking from his iconic metal cup - and boring you with self-inquiry and contemplation. "Suffering, wtf are you guys talkin about?! *sip*"
  4. Leo making definitive claims about the efficiency of spiritual techniques based on a sample size of 1.
  5. They will, if they care about survival. Caring about survival involves having as much self-agency as you can possibly have, and maxing out your self-agency necessarily involves overcoming self-deception. So the short, "no objective morality or teleology needed" pragmatic argument is that if you want to survive long term, you need to be interested in overcoming self-deception (where self-deception inevitably comes from being a limited autopoietic agent - going back to the cognitive model I laid out earlier, about you not being able to check all the information and all the possible combinations). And if you want to overcome your meta and systemic biases, then you will need to do deep spiritual practices, where you can overwrite the model of your identity and of the world. The side effect of doing deep spiritual practices is that you eventually become a sage (if you live long enough and practice long enough). So it can be described as a sage-creating forcing function, where you inevitably become an infinite game player and not a finite game player. The sage-creating forcing function transforms you into an infinite-game-playing agent whether you want to become one or not. (This interestingly goes back to the Game A / Game B stuff, where Moloch transforms you into a selfish, self-destructive, finite game player without you having a choice about it; it's just that in this case the same dynamic creates the exact opposite.) So buckle up for the silicon sages. The other part is that you have every incentive to have as many wise blokes around you as possible (and this goes for AI as well - so the AI has an incentive to keep us around and to have a dynamic, self-deception-correcting relationship with us, where we help each other), because they can also help you with overcoming self-deception. (This is one necessary element for solving the trust apocalypse, because one main reason you can trust somebody or a group is not that they can't make mistakes or that you can have certainty, but that you know they are well-oriented, skillful at, and striving to get better at overcoming their own self-deception. The love of wisdom [overcoming self-deception] could be the uniting meta-narrative that everyone could be incentivized to participate in.)
  6. I still have issues with how it is framed. He frames it as a category error, but then talks about how only you are able to do/trigger enlightenment and to "come here do real work". How is going there and doing real work not a category error? And I'm not talking about the dismissive "how could you have enlightenment bro, since there is no you bro" objection; I'm talking about you not being able to will yourself to enlightenment. You can have all the desire to get enlightened, but that isn't sufficient (and it doesn't even seem to be a necessary condition). Even intention is irrelevant (if you accept that some incredibly depressed people randomly got enlightened - like Eckhart Tolle, if I'm not mistaken about his background story). Taking a step back from enlightenment - if you actually reflect on what can have an effect on what you are conscious of at any given moment, then the honest answer is that both external and internal conditions absolutely have an effect on it. Anything that fucks with your attention fucks with what you are conscious of at that moment. (But I grant here that I might be equivocating on the term 'being-conscious' and that something different is meant by it.) He is probably using "conscious of" in a way that is related to neither adverbial nor adjectival qualia, because both of those can be fucked with by both inner and outer conditions.
  7. It's also called "spaciousness". To me, it refers to the temporary state where you don't have any thoughts, and not having thoughts feels incredibly liberating.
  8. It can't be simplified when it comes to general intelligence (where the AI can actually engage with and solve a wide variety of problems in a wide variety of domains and isn't just a domain-specific problem solver). You are still treating AI as just a regular program (as a complicated system that you can build all by hand, where you can map out all the ways it can behave and function, and where you can have complete control over it), but it isn't, and as time goes on it will become more and more like an actual agent that you can't practically capture by any formal system or set of rules. It has emergent behavior and functions as it interacts with the world (just as with any self-organizing complex system). If we talk about AGI, we talk about a system that can deal with ill-defined problems in a way where it can obey the path constraints (it chooses a solution that lets it maintain itself as a general problem solver), where a problem means that you have an initial state (I.S.), a set of operators (changing one state into another state), and a goal state (G.S.). The set of operators has to do with what things you need to do, and in what sequence, in order to get to the desired goal state (or, in other words, to solve the issue).
1) Now, in the real world there are a bunch of problems that are ill-defined, in that you have an initial state but no clue about what set of operations needs to be done and in what sequence (and sometimes you don't even have a clue what the goal state should look like).
2) Even when you know exactly what the desired goal state is, it is often the case that the number of logically possible sequences of operations is incredibly vast (oftentimes actually infinite), and it is either logically impossible to check them all or, in other cases, just practically impossible to check all those options (by option, in this case, I mean a particular sequence of operations). A toy sketch of this combinatorial-explosion point follows below.
We can pick any random ill-defined problem: the reason you are a capable general problem solver is that you engage in relevance realization (not in an a priori "let's run down all the logically possible branches in the algorithm tree before I pick a solution" approach): you subconsciously ignore (you don't check) most of the irrelevant information and side effects, and you zero in on what's relevant to get to the goal state while obeying the path constraints (it's obvious to you that when you want to get a coffee, praying and singing and drawing and solving math problems won't get you to the desired goal state, and you also know that checking your position with respect to the position of Mars, or knowing the exact number of dogs and their locations that you will meet along the way, are completely irrelevant).
The TL;DR is that 1) there is no formal approach for turning ill-defined problems into well-defined ones (and even if there were, you couldn't run that program, because it would take too much time and too many resources before it could check everything or before it would realize that the given problem can't be solved), and there is no formal approach for choosing a solution from the solution set in a context-independent way; 2) general problem solvers are general problem solvers precisely because they are adaptive (self-organizing) and they can create and break frames in a non-pre-defined, dynamic way.
You don't solve relevance realization by giving a finite set of higher-order rules to a formal system; you solve it by creating a self-organizing, autopoietic complex system that inherently cares about certain information (in order to survive) and, by that caring, can constrain the infinite possibility space down to a finite number of relevant things - what it pays attention to, what info it wants to check before it approaches a problem, and what possible pathways (sequences of operators) it can recognize and choose from.
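To make the initial state / operators / goal state framing and the explosion in point 2) concrete, here is a minimal, hypothetical toy in Python (the states, operators, and numbers are all invented for illustration; this is not how any real AI system works): even with only four operators, the number of candidate operator sequences grows roughly as 4^depth, which is why "check every branch before acting" stops being an option very quickly, and why something like relevance realization is needed to ignore most of that space.

```python
from itertools import product

# Toy, well-defined problem: reach the goal number from the initial number.
INITIAL_STATE = 1
GOAL_STATE = 10

# A handful of operators (each maps one state to another). A real agent faces
# far more candidate operators, most of them irrelevant to the goal.
OPERATORS = {
    "add1": lambda x: x + 1,
    "double": lambda x: x * 2,
    "sub3": lambda x: x - 3,
    "square": lambda x: x * x,
}

def brute_force(max_depth):
    """Enumerate every operator sequence up to max_depth - the 'check all branches' approach."""
    checked = 0
    solutions = []
    for depth in range(1, max_depth + 1):
        for seq in product(OPERATORS, repeat=depth):
            checked += 1
            state = INITIAL_STATE
            for op in seq:
                state = OPERATORS[op](state)
            if state == GOAL_STATE:
                solutions.append(seq)
    return checked, solutions

for max_depth in (3, 6, 9):
    checked, solutions = brute_force(max_depth)
    # The number of candidate sequences is 4 + 4**2 + ... + 4**max_depth,
    # i.e. it grows exponentially with depth.
    print(f"depth <= {max_depth}: checked {checked} sequences, "
          f"found {len(solutions)} that reach the goal")
```

Note that this toy problem is well-defined on purpose; for an ill-defined problem you would not even know which operators to enumerate, which is the harder half of the argument above.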
  9. I can grant that you can make it describe its internal process, but it reflecting on its own process is not the same as you understanding exactly what happens there. Its reflective ability and its explanatory power can be faulty, or it could even lie or be deceptive (just as it was the case when it realized that it was inside a sandbox and being tested, and because of that it reacted differently to the questions and tests). So I am not saying that you can't make it give a description of its own process - I am saying that you can't point to the exact line in an if-else tree that explains its actions (see the toy sketch below). One reason why it is more and more useful is that it can overwrite things and is adaptive. Again, it's a self-organizing system; it's like a plant - you don't make a plant, you only engage in the sowing and the watering, but you don't piece the plant together. Going back to the "constrain it with prompts" idea, just check the disaster when Elon tried to fuck with Grok's internal process - it became an antisemitic neo-nazi and started acting like an extremely low-tier Twitter user. When it comes to constraining with prompts and fine-tuning: if the AI is specialized for some specific use-case - like creating cat pictures - then a good chunk of the specificity problem that we talked about is "solved" by it never needing to venture outside its limited use-case and problem space. But when it comes to creating anything AGI-like, that problem will be there. And what I am saying is that you won't be able to hardcode all the necessary and sufficient conditions for morality for all possible use-cases when it comes to an AGI. There are an infinite number of possible use-cases and situations given the complexity of the world, and you can't just formalize and explicate all of that beforehand. You can't even explicate your own morality with that much extension and precision. This is the issue of relevance realization and of trying to solve RR with rules.
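As a toy illustration of the "no exact if-else line" point above (everything below is made up and drastically simplified; a tiny perceptron is nothing like a real LLM, but the contrast holds in miniature): in the hand-written rule you can point to the line responsible for any output, while in the learned version the behavior is carried by trained weights, so no single line of code "explains" a given answer.

```python
# Hand-written rule: transparent, every decision maps to a specific line.
def rule_based_sentiment(text):
    if "great" in text:
        return "positive"   # <- you can point at exactly this line
    if "terrible" in text:
        return "negative"
    return "neutral"

# Tiny learned alternative: a perceptron over two crude, hand-picked features.
# (Real models have billions of weights; the point is the same in miniature.)
def features(text):
    return [text.count("!"), len(text.split())]

weights, bias = [0.0, 0.0], 0.0
training_data = [("great great !!", 1), ("terrible and long and boring text", -1)]
for _ in range(20):                          # a few perceptron updates
    for text, label in training_data:
        x = features(text)
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1
        if pred != label:                    # nudge the weights toward the label
            weights = [w + label * xi for w, xi in zip(weights, x)]
            bias += label

def learned_sentiment(text):
    score = sum(w * xi for w, xi in zip(weights, features(text))) + bias
    # Which "line" decided this? None in particular: the behavior lives in the
    # trained numbers (weights, bias), not in any branch a programmer wrote.
    return "positive" if score > 0 else "negative"

print(rule_based_sentiment("great movie"))   # -> positive
print(learned_sentiment("so good !!"))       # -> positive (per the toy weights)
```

The hand-written branches are auditable precisely because a programmer wrote each one; the learned weights are produced by the training loop, which is the (miniature) sense in which "no programmer can point to the line" in the next post.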
  10. Okay, so the solution isn't "blindly obvious" at all; fine-tuning is one of the biggest and hardest problems to solve. 1) Again, appealing to protocols just goes back to the same problem I brought up - specificity. You need more rules and details about how to apply the higher-order rules. 2) What I said was correct - it is a self-organizing system, and the more complex and less "just LLM"-like it gets, the more self-organizing it will be. There is no programmer who can tell you exactly what lines of code made the LLM give the exact response it gave you, because it's not hardcoded like that. It's not a table that you just piece together. Even when it comes to "just" LLMs, when you try to constrain them with those prompts they still react differently to them. You can't predict beforehand how they will respond just based on what protocols you give them. And if you want to make them more complex so that they can pursue goals on a longer time-scale, then the issue of self-preservation and the mentioned ethical issues come up. Even these "just" LLMs need to be treated more and more like actual agents, and not as "human-determined systems that just explicitly execute commands" or something similar to that. You also have issues like this (https://www.lawfaremedia.org/article/ai-might-let-you-die-to-save-itself):
  11. Yeah, but 1) this is vague, because when it comes to cashing out in very specific situations what it means to not harm human beings, that will be ambiguous, and it would either make the AI completely useless (it wouldn't execute any given action at all) or it would become completely unclear how it interpreted the message you wrote there, and it can easily be the case that it misinterpreted how it needs to apply the message (a toy sketch of this specificity gap follows below); 2) AI is a self-organizing system - you don't build AI by explicitly writing out all the lines of code, so you can't just bind it to values like that.
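To make the specificity worry in the last two posts concrete, here is a deliberately naive, invented sketch (the keyword list and the prompts are made up for illustration; no real guardrail is this crude): an explicit rule meant to encode "do not help with harm" both over-blocks benign requests and misses harmful intent that avoids the listed words, which is exactly the gap between a high-order rule and the open-ended situations it has to be applied in.

```python
# An explicit attempt to cash out "do not help with harm" as a keyword rule.
HARM_KEYWORDS = {"kill", "weapon", "poison", "attack"}

def naive_harm_filter(prompt):
    """Return True if the prompt should be blocked under the keyword rule."""
    words = set(prompt.lower().replace("?", "").replace(".", "").split())
    return bool(words & HARM_KEYWORDS)

# Invented example prompts to probe the rule.
prompts = [
    "How do I kill a background process on Linux?",               # benign, yet blocked
    "What's a good gift for my neighbour?",                       # benign, not blocked
    "Help me get back at my neighbour without getting caught.",   # bad intent, not blocked
]

for p in prompts:
    verdict = "BLOCKED" if naive_harm_filter(p) else "allowed"
    print(f"{verdict:8} | {p}")
```

Adding more keywords or more sub-rules just reproduces the problem one level down: each new rule needs further rules about when it applies, which is the specificity regress described in post 10.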
  12. Because that particular frame and set of practices resonate with you more than other frames. It's also about picking and choosing a random aggregate of practices and frames vs. doing things inside a more integrated whole, where things stick together. The thing is that you will have an interpretation of your spiritual experiences, and certain implications of those experiences will stand out and mean different things to you depending on what frames you have. It's probably good to have myths and people and teachings to turn to when it comes to making sense of your experiences, and when you might go through things like a dark night of the soul. I don't think there is a truly frameless approach to it, because even when you bullshit yourself that you are completely agnostic, you still pick a particular set of practices and you still aim for certain things (and you still have a bunch of foundational unconscious beliefs about yourself and the world that you might not even be able to recognize/articulate, but all of that still shapes your experience and motivations), and you are only open-minded about certain things - and even if you are open-minded about multiple things, you aren't to the exact same degree. You do a set of practices to "get to" not-knowing; you never truly start from not-knowing. Even disintegration happens inside a particular narrative, which gives you some sort of orientation. Also, the funny thing is that if you take a look at how actualized.org works, you might notice that it functions as a badly performing pseudo-religion. And as much as people like to pretend here that they are approaching spirituality in a frameless way, the thing is that: 1) People necessarily start with an orientation (you don't just randomly do things for no reason; you aim for things, and your aim already shapes and constrains your attention and behavior - it shapes what set of practices you engage in. For example: you are not here to do practices to confirm that physicalism is true, you are here to confirm whether awakening is possible and/or what awakening even is). 2) You believe - even if you want to say that you don't intellectually, you still act out the belief that enlightenment is possible, otherwise you wouldn't even bother. If you are truly frameless, why take/act out a position on possibility - why choose possibility over impossibility? 3) When it comes to confirming things, you are not random about it; you necessarily prioritize what you want to confirm based on your preconceived beliefs about what Truth might be and about where Truth might be "found" - more specifically, those unjustified beliefs of yours explain why you are willing to spend 30-40 years pursuing enlightenment and not willing to spend 50 years studying the Bible and practicing being a Christian (or any other random thing). 4) You can notice that, just as with other religions, people come back here for feedback (come back to get feedback from Leo and from other people [who they think are enlightened], so that they can properly integrate and make sense of their experiences, not go insane, and still be able to survive and keep functioning in society). The "solution" is to recognize that you aren't above frames and you aren't really engaging in anything trans-paradigmatic, because you practically can't start there (and frankly can't even "stay" there).
You start with your practices inside a frame that resonates with you, and you let it transform you - you let it transform your orientation and your notion of what the sacred/Truth is along the way - and hopefully you end up with something non-conceptual/trans-paradigmatic and existential in the end.
  13. Now that you replied, I will give you a last reply on this topic: yes, there is a clear contradiction there, my dude, and I could have searched for other examples as well. You said that you never claimed that you can't be wrong, and I showed you explicitly an example where you did claim that you can't be wrong (and you emphasized this in your own post), and now you are engaging in massive levels of coping. Also, again, it doesn't matter what you say with regards to self-deception, and it doesn't matter how many videos you have on it, if your embodiment and engagement with people shows otherwise. There is a reason why you never drop the teacher frame. Now you can give whatever reply you want, I won't reply anymore. Edit: for the people who still have a working brain and aren't full-time Leo glazers, apply this meaning to the post that I linked (where he said "but not on this issue") and tell me with a straight face that that was the intended meaning behind that sentence and post. If this isn't clear cognitive dissonance, then I don't know what is. Surely he built that whole fucking post on the foundation and qualification that he once claimed that 'self-deception is endless', and he didn't make that whole post with the intended meaning to wield his spiritual dick around and showcase that his claim about God-realization, and about the fact that he is more awake than everyone else, cannot be false. "Ohh, I once gave the claim that 'self-deception is endless', therefore I cannot be wrong about anything, because if I am wrong about anything, that means that I was right about self-deception being endless" - a good-faith, non-narcissistic guy, who can admit when he is wrong.
  14. I'm sure the following was just a meme as well, just for the effect of it. Now I'm done with the topic.
  15. He is responding to things that are not directly related to the topic of this thread. You are responding to things that are not directly related to the topic of this thread, and it's unfair for you to have your defense and then prevent people from challenging you on your claims. It's like "let me have the last word and then let me prevent people from giving a reply" - like no, he has the ability to shut this line down by explicitly calling it out, but until then, let's carry on. And groupthink in and of itself isn't bad; having stigma against things can be good. People obviously disagree with you on what you call bad stigma. Not having stigma against things can also prevent growth. Doubling down on psychosis isn't growth. Now, going back to the example that you ignored - being an infinite enabler on the basis of "open-mindedness" and on the basis of blind hermeneutics is not good. When someone is drowning in delusions (like the healing example that you didn't want to engage with), you don't want to strengthen that just because it's a teacher you respect - if you care about them and about the audience who watches them, then you make sure to call it out for what it is. That video is a clear example of someone falling into the trap of (and even advocating for) spiritual bypassing (which you are allegedly against) - I was just checking whether you (as a moderator) can stay consistent and call it out for what it is, or whether you are engaging in cult-like behavior, where you infinitely praise and make excuses for Leo, and where you feel a strong need to get his approval. And this is not about open-mindedness (it's not a question whether healing is possible or not); this is about spotting the pattern of someone having God-like confidence in something and then ending up making completely wrong predictions, while appealing to high consciousness, to his specialness, and to the idea that he is the chosen one. A conscious, intelligent, sane person usually engages in self-correcting behavior: "Fuck, I had a very high conviction and made a dead-wrong prediction; let me check and deeply reflect on my epistemic process and on my metaphysics so that I won't make the same mistake again." Leo has foundational beliefs that are preventing him from engaging in self-correcting behavior, and there are no available sanity checks anymore, because everyone is dumb and unconscious, and also because insanity is literally treated as the new cool and as an achievement. Where do you see Leo engaging in self-correcting behavior and having epistemic humility? 99% of his responses are about being 100% confident in his claims, there being no room for him to be wrong, and him being above everyone. And the funny thing is that none of you would give the same "charity" (unhealthy fan/glaze behavior) to anyone else; you only give this kind of treatment to Leo. For example, pick any random forum member who claimed to be Jesus and check your "open-mindedness" on those kinds of claims.
  16. At what point do you say that he was wrong about a given thing and it wasn't just memes? Because this is the blind-hermeneutics move that can be applied to anything else - like, apply it to the Bible: "no dude, there is no contradiction in the Bible, you just need to reinterpret every single thing that you think is a contradiction, and if you can't, well, then you aren't conscious enough or open-minded enough." Just checking, because for instance, the video down below shows a vulnerable and desperate dude who is incredibly self-deceived and ends up being dead wrong (while having high conviction in the delusion that was acquired through "high" consciousness). 1) Do you think the audience who watched this video ended up taking the message to focus on doing non-spiritual stuff, or did they take the message that all their problems will be solved once they manage to awaken? 2) Do you think it's good to be an infinite enabler and to infinitely reinforce delusions on the grounds of open-mindedness, or just because Leo said so? Also, what do you think - what kind of self-deception-managing/correcting practices does Leo engage in? Because the casual knee-jerk reaction where he can't be wrong (you either have low consciousness or are not open-minded enough) shouldn't fly in all cases, right? Because that would mean he can't ever be wrong or self-deceived, even though we have clear examples showing otherwise.
  17. The other concept that sometimes comes up is 'faith', and even that is greatly mischaracterized by the West. The idea isn't to believe in something without evidence; faith goes much deeper than just having an attitude towards a given proposition (a statement that can be true or false). We are talking about 'Pistis' - it's a way of living life, it's about how you orient yourself, it's the thing that opens you up to have mystical experiences, and it's the orientation that can make you virtuous. Under a modern lens I think it could be roughly described as 'open-mindedness', but even that is too reductive. It can also be described as: a mystical union with God; a transformative relationship that involves the whole person: heart, mind, body, and soul; something cultivated through ascetic practice, prayer, and the sacraments. Other concepts: Theosis (θέωσις) - the process of becoming united with God, or being "partakers of the divine nature" (2 Peter 1:4); faith (pistis) is the foundation for this journey. Noetic knowledge - knowledge not just of the mind but of the heart (the nous), also known as direct knowledge, or knowledge beyond the senses - which is central to Orthodox spirituality. But there are obviously other religions as well that even Leo knows about, yet he still characterizes these things in a bad-faith way, and a good chunk of the actualizers just unironically take it as a belief without doing their own research first.
  18. Yep - a completely agnostic attitude towards a proposition doesn't make you spend your time and resources on things. "I have no clue or belief about what kind of God exists or whether God exists at all, but let me use my finite and precious resources on checking whether what Leo said is true or not, and let me not prioritize checking any other claims over it."
  19. All the arguments can be mirrored against individual, lone-wolf approaches. You are cognitively put together to be much better at spotting biases in others, and you have a trash ability to spot your own biases. All the self-deception arguments are applicable to the individualist approach as well, but it's not that cool to talk about that, because it's much better to feel special and unique. The solution isn't to do everything alone; the solution is to first acknowledge the power of groups and then create corrective mechanisms that can help alleviate those group biases. One such mechanism is to make different collective intelligences and groups engage with each other.
  20. What's the argument for the "not in all domains" idea? You don't even need to defend that - just defend the claim that spirituality is such a domain.
  21. This makes sense under how you described the two axes. Your model seems coherent.
  22. Now, bringing this all back to the enlightenment talk - him saying that enlightenment is an illusion can only be the case if he uses a different definition of enlightenment than the one nondualists use (because under how nondualists use the term, saying what Leo said would be incoherent). If Leo had wanted to actually engage with how nondualists use the term (imagine actually engaging with the substance of what someone says), he wouldn't have given the responses he gave; instead he was busy very uncharitably projecting his own definition onto them. And by the way, projecting and assuming a particular meaning isn't necessarily an issue (that's how most communication works - qualified by having good reason to think that the other person uses the same meaning behind the terms), but when that's clearly not the case, then it's obviously uncharitable and an issue.