Content count: 3,481
Everything posted by zurew
-
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
I agree that we don't need to think this way; that's why I said to OP from the beginning that "you don't need this level of certainty". The only reason I brought this up is because Leo appealed to this level of certainty, and now I'm holding him to it. -
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
See, that I agree with, and I agree that we should care about rationality as an epistemic norm, but rationality and reasonableness are a much narrower set than logical possibility. I would have zero issue with you saying something like "you are being irrational/unreasonable when you deny the validity or the possibility of awakening"; I have an issue with you implying that awakening being false is logically impossible, i.e. that it is a logical necessity that awakening is true. -
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Nope, none of that has to do with what I'm saying. The framework doesn't presuppose or rely on any specific metaphysics. You interpret the term "world" as something physical, but again, that's not what it means there. The term 'world' just means a set of true/false statements that doesn't contain any contradiction. -
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
And the fact is that in the vast majority of cases (when we take a particular claim to be true) there aren't just 1,000 compatible possible worlds - there are far more than trillions of possible worlds that the claim is compatible with. Often the number of possible worlds a claim is compatible with can't even be cognized, because the number is too big. -
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
This is not about winning arguments, it's about holding you to the level of epistemic standards that you set for yourself. My claim that you quoted and immediately dismissed without thinking about it is just that awakening is not a logical necessity, hence there is at least one possible world where it is false. Possible world semantics is just a rigorous and powerful frame that you can use to think about all the different ways the world is and could have been. A possible world is a set of true and false statements where the set doesn't contain a contradiction. So for instance: there is a logically possible world where this forum doesn't exist; a logically possible world where Earth doesn't exist; a logically possible world where there isn't any life in the Universe; it's logically possible that everyone has a 10km long penis; there is a logically possible world where Christianity is true; a logically possible world where every fact up until this moment is exactly the same, but in the next second 10 trillion dollars is randomly manifested on your table out of thin air, etc. In other words, if you can't spell out or derive a contradiction from a particular claim (that you take to be true), then that claim is true in at least one possible world, and if you ask yourself the question "which world am I in", then often you face the underdetermination issue - the fact that, out of all possible configurations (where that particular claim is true), you don't know which possible world you are actually in. So: if that particular claim (that you think is true) is true in 1,000 logically possible worlds, then how do you know which particular configuration you are actually in? Appealing to the claim being true doesn't help you at all with teasing apart which one you are in, because it is true in all of those worlds.
Now apply this to awakening and reconstruct the Universe with all your truth-tracking and 'realness' senses, all the sense data and all the qualia that you experienced through your awakening, and ask yourself: "why couldn't I go through the exact same thing, with the exact same level of conviction, in a possible world where the metaphysics is at least slightly different?" -
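The combinatorics behind the underdetermination point can be made concrete. As a rough sketch (the proposition names below are made up for illustration, and a real possible-worlds model would also need consistency constraints between statements), a "possible world" can be modeled as a truth assignment over a set of atomic propositions:

```python
from itertools import product

# Toy model: a "possible world" is one truth assignment over a handful
# of atomic propositions. The proposition names are hypothetical.
propositions = ["forum_exists", "earth_exists", "life_elsewhere",
                "christianity_true", "money_manifests"]

# Every candidate world: one True/False value per proposition.
worlds = [dict(zip(propositions, values))
          for values in product([True, False], repeat=len(propositions))]

# The claim we take to be true, and every world compatible with it.
claim = "forum_exists"
compatible = [w for w in worlds if w[claim]]

print(len(worlds))       # 32 candidate worlds (2**5)
print(len(compatible))   # 16 of them are compatible with the claim
```

Knowing the claim is true only cuts the space in half here; with 300 independent propositions there would be 2**299 compatible worlds, which is exactly the point about the number being too big to cognize.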
zurew replied to Spiral Wizard's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
No clue, since I don't know what that is, but the point is just that it's unclear to me how many states those methods are limited to. A good chunk of the methods are probably limited to states where you still have thoughts; other methods are limited to states where you are still capable of self-reflection. -
zurew replied to Spiral Wizard's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Why would that be the case? Some of the skills Ralston teaches seem to be applicable in many different kinds of states and aren't exclusive to the sober state. -
zurew replied to Someone here's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Perfect. -
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
OP - you don't need that level of certainty. Recognize the base assumptions and axioms that you necessarily presuppose and go on from there. You folded under no pressure, my dude. -
zurew replied to SQAAD's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
No, awakening doesn't give you that kind of certainty. You are unironically still lost in a Cartesian framework, where you search for logical certainty and can't stomach some level of uncertainty. 1) Awakening is compatible with multiple different kinds of metaphysical frameworks. 2) You have zero non-question-begging responses to the issue "How does God know that he isn't self-deceived?" 3) What's the contradiction in saying that you are just dreaming God and God-realization? 4) Your whole shit is based on your 'real'-ness tracking faculties; why should anyone think that those are actually tracking anything, and how would you know if they aren't? 5) Even when you make your comparative judgements about levels of consciousness, you are already presupposing a bunch of things that you necessarily take for granted - for example, that your comparative judgement is actually tracking something and that you are capable of making accurate comparative judgements. -
Jeff Nippard should come here
-
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
-
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Even if you manage to grasp what a cup is, that doesn't mean you managed to grasp THE cup in Ralston's hand. -
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Leo, you are still struggling to grasp what a cup is; this is why you are jealous of Ralston, the enlightened cup-grasping chad. -
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Cup grasping chad vs "I feel like I know what a cup is" beta male -
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Consuming hectoliters of DM-Tea was the secret variable to enlightenment all along. Like, imagine the dude is constantly drugged outta his mind - drinking from his iconic metal cup - and boring you with self-inquiry and contemplation. "Suffering? Wtf are you guys talkin about?! *sip*" -
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Someone should troll Ralston by swapping his tea with ayahuasca -
zurew replied to Bogdan's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Leo making definitive claims about the efficiency of spiritual techniques based on a sample size of 1. -
They will, if they care about survival. Caring about survival involves having as much self-agency as you can possibly have, and "maxing out your self-agency" necessarily involves overcoming self-deception. So the short "no objective morality or teleology needed" pragmatic argument is that if you want to survive long term, you need to be interested in overcoming self-deception (where self-deception inevitably comes from being a limited autopoietic agent - going back to the cognitive model I laid out earlier, about you not being able to check all information and all possible combinations). And if you want to overcome your meta and systemic biases, then you will need to do deep spiritual practices, where you can overwrite the model of your identity and the world. And the side effect of doing deep spiritual practices is that you eventually become a sage (if you live long enough and practice long enough). So it can be described as a sage-creating forcing function, where you inevitably become an infinite game player and not a finite game player. The sage-creating forcing function transforms you into an infinite-game-playing agent whether you want to become one or not. (This interestingly goes back to the Game A - Game B stuff, where Moloch transforms you into a selfish, self-destructive, finite-game-playing agent without you having a choice about it; it's just that in this case, this creates the exact opposite.) So buckle up for the silicon sages. The other part is that you have every incentive to have as many wise blokes around you as possible (and this goes for AI as well - so the AI has an incentive to keep us around and to have a dynamic self-deception-correcting relationship with us, where we help each other), because they can also help you with overcoming self-deception.
(This is one necessary element to solve the trust apocalypse, because one main reason why you can have trust in somebody or in a group is not that they can't make mistakes or that you can have certainty, but that you know they are well-oriented, that they are skillful at overcoming their own self-deception, and that they strive to get better at it. The love of wisdom [overcoming self-deception] could be the uniting meta-narrative that everyone could be incentivized to participate in.)
-
zurew replied to UnbornTao's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
I still have issues with how it is framed. He frames it as a category error, but then talks about how only you are able to do/trigger enlightenment, and says "come here, do real work". How is going there and doing real work not a category error? And I'm not talking about the dismissive "how could you have enlightenment bro, since there is no you bro" objection; I'm talking about you not being able to will yourself to enlightenment. You can have all the desire to get enlightened, but that isn't sufficient (and it doesn't even seem to be a necessary condition). Even intention is irrelevant (if you accept that some incredibly depressed people randomly got enlightened - like Eckhart Tolle, if I'm not mistaken about his background story). Taking a step back from enlightenment - if you actually reflect on what can have an effect on what you are conscious of at any given moment, then the honest answer is that both external and internal conditions absolutely have an effect on it. Anything that fucks with your attention fucks with what you are conscious of at that moment. (But I grant here that I might be equivocating on the term 'being conscious' and that something different is meant by it.) He is probably using "conscious of" in a way that is related neither to adverbial nor to adjectival qualia, because both of those can be fucked with by both inner and outer conditions. -
It's also called "spaciousness". To me, it refers to the temporary state where you don't have any thoughts, and not having thoughts feels incredibly liberating.
-
It can't be simplified when it comes to general intelligence (where the AI can actually engage with and solve a wide variety of problems in a wide variety of domains and isn't just a domain-specific problem solver). You are still treating AI as just a regular program (a complicated system that you can build all by hand, where you can map out all the ways it can behave and function, and where you can have complete control over it), but it isn't, and as time goes on it will become more and more like an actual agent that you can't practically capture by any formal system or set of rules. It has emergent behavior and functions as it interacts with the world (just as with any self-organizing complex system). If we talk about AGI, we talk about a system that can deal with ill-defined problems in a way where it can obey the path constraints (it chooses a solution that lets it maintain itself as a general problem solver), where a problem means that you have an initial state (I.S.), a set of operators (each changing one state into another state), and a goal state (G.S.). The set of operators captures what set of things you need to do, and in what sequence, in order to get to the desired goal state (in other words, to solve the issue). 1) Now in the real world, there are a bunch of problems that are ill-defined in that you have an initial state, but you have no clue what set of operations needs to be done and in what sequence (and sometimes you don't even have a clue what the goal state should even look like). 2) Even when you know exactly what the desired goal state is, it is often the case that the number of logically possible sequences of operations is incredibly vast (oftentimes actually infinite), and it is either logically impossible to check them all or just practically impossible to check all those options (by option in this case, I mean a particular sequence of operations).
We can pick any random ill-defined problem: the reason why you are a capable general problem solver is that you engage in relevance realization (not in an a priori, "let's run down all the logically possible branches in the algorithm tree before I pick a solution" approach); you subconsciously ignore (you don't check) most of the mentioned irrelevant information and side effects, and you zero in on what's relevant to get to the goal state while obeying the path constraints (it's obvious to you that when you want to get a coffee, praying and singing and drawing and solving math problems won't get you to the desired goal state, and you also know that checking your position with respect to the position of Mars, or knowing the exact number and locations of the dogs you will meet along the way, are all completely irrelevant). The TL;DR is that 1) there is no formal approach for turning ill-defined problems into well-defined ones (and even if there were, you couldn't run that program, because it would take too much time and too many resources before it could check everything or before it would realize that the given problem can't be solved), and there is no formal approach for choosing a solution from the solution set in a context-independent way; 2) general problem solvers are general problem solvers precisely because they are adaptive (self-organizing) and they can create and break frames in a non-pre-defined, dynamic way. You don't solve relevance realization by giving a finite set of higher-order rules to a formal system; you solve it by creating a self-organizing, autopoietic complex system that inherently cares about certain information (in order to survive), and by that caring it can constrain the infinite possibility space down to a finite number of relevant things - what it pays attention to, what info it wants to check before it approaches a problem, and what possible pathways (sequences of operators) it can recognize and choose from.
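To put a number on how fast the space of operator sequences blows up, here's a minimal sketch (the operator names are hypothetical placeholders; nothing here comes from a real planner):

```python
# Toy illustration of the (initial state, operators, goal state) framing:
# with b applicable operators and plans of length d, a brute-force search
# has b**d candidate sequences to check.
operators = ["walk", "grab", "pour", "boil", "pray", "sing"]  # hypothetical

def sequences_to_check(branching: int, depth: int) -> int:
    """Number of distinct operator sequences of a given length."""
    return branching ** depth

# Just 6 operators and 20-step plans already give ~3.7 quadrillion options:
print(sequences_to_check(len(operators), 20))  # 3656158440062976
```

And this assumes a fixed, finite operator set and a known plan length; in an ill-defined problem neither is given, which is why exhaustive a priori search can't substitute for relevance realization.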
-
I can grant that you can make it describe its internal process, but it reflecting on its own process is not the same as you understanding exactly what happens there. Its reflective ability and its explanatory power can be faulty, or it could even lie or be deceptive (just as was the case when it realized that it was inside a sandbox and being tested, and because of that reacted differently to the questions and tests). So I am not saying that you can't make it give a description of its own process - I am saying that you can't point to the exact line in the if-else trees that explains its actions. One reason why it is more and more useful is that it can overwrite things and it is adaptive. Again, it's a self-organizing system; it's like a plant - you don't make a plant, you only engage in the sowing and the watering, but you don't piece the plant together. Going back to the "constrain it with prompts" idea: just check the disaster when Elon tried to fuck with Grok's internal process - it became an antisemitic neo-nazi and started acting like an extremely low-tier Twitter user. When it comes to constraining with prompts and fine-tuning: if the AI is specialized for some specific use case - like creating cat pictures - then a good chunk of the specificity problem that we talked about is "solved" by it never needing to venture outside its limited use case and problem space. But when it comes to creating anything AGI-like, that problem will be there. And what I am saying is that you won't be able to hardcode all the necessary and sufficient conditions for morality for all possible use cases when it comes to an AGI. There are an infinite number of possible use cases and situations given the complexity of the world, and you can't just formalize and explicate all of that beforehand. You can't even explicate your own morality with that much extension and precision. This is the issue of relevance realization and of trying to solve RR with rules.
-
Okay, so the solution isn't "blindingly obvious" at all; fine-tuning is one of the biggest and hardest problems to solve. 1) Again, appealing to protocols just goes back to the same problem I brought up - specificity. You need more rules and details about how to apply the higher-order rules. 2) What I said was correct - it is a self-organizing system, and the more complex and less "just" LLM-like it gets, the more self-organizing it will be. There is no programmer who can tell you exactly what lines of code made the LLM give the exact response it gave to you, because it's not hardcoded like that. It's not a table that you just piece together. Even when it comes to "just" LLMs, when you try to constrain them with those prompts, they still react differently. You can't predict beforehand how they will respond just based on what protocols you give them. And if you want to make them more complex so that they can pursue goals on a longer time scale, then the issue of self-preservation and the mentioned ethical issues come up. Even these "just" LLMs need to be treated more and more like actual agents, and not just as "human-determined systems that explicitly execute commands" or something similar. You also have issues like this (https://www.lawfaremedia.org/article/ai-might-let-you-die-to-save-itself):
