zurew

Everything posted by zurew

  1. Given that notion, is the motivation for the claim that higher-level intelligence includes better character and more care something like this: to be intelligent is to care about and to recognize fundamental truths about reality. It is metaphysically true that reality is fundamentally love and that everything is ultimately one, and being aligned with that truth means recognizing that fact and living accordingly. So it is basically a completely different way/mode of being, where you process and filter information differently, and if you are not in that kind of mode of being, you don't have access to / can't recognize certain truths.
  2. Interesting, thank you for the answer. I take it that by "affirm" there you don't just mean an intellectual notion, but a deeper non-propositional one. It might be similar to how Christian mystics use a non-propositional notion of faith.
  3. That's not the point. You are bringing in normativity, but that's not relevant in this specific case. We can label "problem solving" any way we like; we don't need to put the label "intelligence" on it if we don't want to. But we can engage with that concept descriptively, make inferences about it, and think about it. The issue here is that if better problem solving can be achieved without the given AI developing any kind of good moral character or care about whole systems, then we might have a problem (if the developers of AI only care about making better problem solvers).
  4. Well, I'm not sure, because I think you used the same phrase (forgive) with two different notions. Here I think you meant something completely different by the term "forgive" compared to what you meant by "forgive" here: I would interpret the first notion in some kind of profound participatory sense, where you completely give up your ego and participate in or merge with God, or something like that. And I would interpret the second notion of "forgive" in a sense where you forgive a wrongdoing that might have hurt you, and with that you can actually heal and karma won't fuck you up later.
  5. And as an additional point, you can care about the whole while not caring just as much about the parts. There can be scenarios where replacing or destroying parts is beneficial for the greater whole (easy example: destroying cancer to preserve your body).
  6. Not yet, and it might be the case that LLMs will never have a character, but I don't see why we would assume that a non-LLM-based AGI won't ever have a character, or why it couldn't ever overwrite the built-in character.
  7. I also have issues with the described holistic-thinking idea, where self-preservation somehow needs to be implicit. I don't see why that would be the case. Why couldn't there be an AI with holistic thinking that doesn't care about self-preservation at all? Or one that only cares about self-preservation to some degree, but has things it cares about more (which could make the AGI self-delete and destroy a bunch of other things along with it), or one that only wants to self-preserve for some finite amount of time? But even if we assume that holistic thinking somehow necessarily includes the value of self-preservation, even in that context there are a bunch of nuanced scenarios where you can perfectly well destroy a lot of things without destroying yourself.
  8. I don't know what your notion of "true intelligence" is, but regardless of what it is, if we create a different notion like "the ability to solve problems", then I don't see what the argument is for the implicit claim that AI can't become a better problem solver than humans while also having an evil or sociopathic character.
  9. Can you unpack that a little bit more?
  10. I don't think the majority of people have a psychology that would allow them to do whatever they want without any psychological pushback. But regardless of how many people have that kind of psychology, one thing is for sure: you won't persuade psychopaths not to do bad things by talking about moral realism. They wouldn't care at all.
  11. That's different from the policy she wants to implement, though. I would be curious what empirical data we would see once a wage-like policy is implemented. I think, for some of the reasons she already laid down, we have reason to think that giving wages to moms would impact the empirics on birth rate slightly differently than the things you listed there. (To name just one reason: the mom isn't dependent on her husband's money anymore.)
  12. That would at best establish that that particular person believes moral realism is true, but it wouldn't establish that it is actually true. I think the mistake you make is interpreting all ought statements as if they were moral realist claims, but I don't think that's right. In the other thread, I gave you alternative ways to cash out and interpret 'ought' statements under antirealism. So just because someone utters an ought statement, it doesn't necessarily follow that they are talking about objective moral facts; those utterances are compatible with their subjective stance on the matter and with what they personally desire about how other people should behave (given their subjective values).
  13. Briefly, on the non-pragmatic side, I haven't seen any good argument that establishes why moral realism is true. On the pragmatic side, I don't think you gain anything by affirming moral realism (no matter what type of moral realism you go with), and I think you can explain all facts about the world under antirealism. So from my pov, you just needlessly inflate your ontology (just like you would needlessly inflate it if you randomly affirmed that unicorns exist for no particular reason). Depending on which type of moral realism you want to go with: There are versions of moral realism where moral claims can be reduced down to descriptive claims (like certain descriptive facts about the world). I just take it that: - Those kinds of views are misleading, because I personally wouldn't even categorize them as moral views. - Since they can be reduced down to descriptive claims, they essentially lack action-guidance. There are other types of moral realist views where those normative claims are irreducible. There, the issue I have is that those views are mostly unintelligible to me and they just lack persuasive force. Why would you ever care about a random irreducible moral claim just based on the fact that it is objectively true? Imagine there were an objectively true irreducible moral claim, "You ought to walk backwards when you go to work"; why would you ever abide by that? And if those objectively true irreducible moral claims happen to be aligned with your subjective preferences, then sure, you will abide by them, but not because they are objectively true; rather, because they are aligned with things you subjectively care about.
  14. I gave you some of those reasons in my previous posts (in the other thread), but I can give more if needed. I would love to see the response to those objections, and I would love to see the argument for realism.
  15. There can be many different possible reasons. One could be that spiritual people want to differentiate themselves from religious people, and one way to do that is to negate some of the views religious people take on certain things. Another could be thinking through all possible options and arriving on their own at the conclusion that moral realism is implausible or false, though I personally take this to be much less likely (based on the fact that most spiritual people are typically not well educated about the literature on morality). Another could be that they picked up antirealist moral intuitions from their culture and from their parents.
  16. To be fair to you, there are versions of moral realism where moral claims can be reduced down to descriptive claims (like certain descriptive facts about the world). I just take it that 1) those kinds of views are misleading, because I personally wouldn't even categorize them as moral views, and 2) since they can be reduced down to descriptive claims, they essentially lack action-guidance. There are other types of moral realist views where those normative claims are irreducible. There, the issue I have is that those views are mostly unintelligible to me and they just lack persuasive force. Why would you ever care about a random irreducible moral claim just based on the fact that it is objectively true? Imagine there were an objectively true irreducible moral claim, "You ought to walk backwards when you go to work"; why would you ever abide by that? And if those objectively true irreducible moral claims are already aligned with your subjective preferences, then sure, you will abide by them, but not because they are objectively true; rather, because they are aligned with things you subjectively care about.
  17. You can cash out "ought" statements by relating them to subjective preferences, desires, and intuitions. The statement "You ought to do X" can be cashed out as "I have the preference that you do X" or "I want you to do X". Some of them can also be cashed out as descriptive, goal-related statements about reality, like "If you want to achieve goal X, then you ought to do Y", where "ought" just describes a constitutive necessity, namely that Y is necessary to achieve X; there isn't anything in there that moral antirealists couldn't agree with. The other thing is that when Leo makes statements about Love and such, those statements are not moral realist claims; they are just claims about metaphysics, about the nature of reality. They are descriptive claims, not normative claims.
  18. How is that incompatible with antirealism?
  19. 1) Why would you need to prescribe any universal moral claim? 2) Why would any individual be motivated by what you prescribe?
  20. It's the thesis that there are moral facts about what's right or wrong independent of stances (standards, intuitions, goals, preferences, desires). But again, nothing special is entailed by moral realism being false. Even the video you linked describes a highly specific moral antirealist view, where you need to tolerate and be okay with the views other cultures have, but that's not entailed under all antirealist views. There is no line from "there are no moral facts that are stance-independently true" to "you need to be okay with and tolerate whatever others want to do or whatever views they have".
  21. I think you have a different idea of what moral realism means compared to the philosophers who endorse the view. Being highly altruistic and dying for others (if you wanted to bring up Jesus dying on the cross) are all compatible with moral antirealism. Moral antirealists can also agree with you that enlightened people are usually highly altruistic and probably have a natural impulse to love and help others; all of that is compatible with moral antirealism being true.
  22. What do you think is precisely at stake if moral realism is false?
  23. I didn't think of simple heuristics; I was thinking more about general heuristics. This is similar to all issues, in that you can always go one level of abstraction higher, check what set of issues you need to deal with each and every time a tradeoff discussion comes up, and somewhat formalize and create a general template for it. The move isn't to turn one's brain off from then on and not use any fluid thinking anymore; the move is to get a deeper and more systemic understanding of tradeoff issues and then, given that deeper understanding, check whether the solutions that worked in other tradeoff cases could be applied to this specific tradeoff case as well. For instance, a general heuristic could be to think through not just first-order but second-order consequences each and every time a tradeoff discussion comes up. (Because you might realize, after studying these issues in a systemic way, that estimates of the cost of delay usually turn out best when second-order thinking is applied, rather than first-order or third-and-higher-order thinking.)
  24. The purpose of the example wasn't to show that it is right; the purpose was to question and challenge how and when slippery slope arguments and caution are applied. The question is: when a new thing is proposed, like "X should be implemented", what set of heuristics do you go through so that you can say "X can actually be implemented" rather than delaying X and saying "there is more work and thinking to be done before X is implemented"? I'm asking what heuristics you use (if any) when you think about these things, where the cost of delaying is calculated and not ignored.
  25. Telling people to question things to death is completely empty when either no satisfying answer is given once a claim is challenged, or the question is simply dodged. If we went through statistically what kind of responses Leo gives when his fundamental claims are challenged, my guess would be that 90%+ of those responses involve saying things like "I am the most awake person, you don't understand what I understand, and I won't bend over backwards to respond to your pathetic misunderstandings and your closed-mindedness." You mistake the empty saying "question things, and verify things for yourself" for not being dogmatic, but uttering that statement is compatible with him and actualized.org being dogmatic. If Leo weren't dogmatic, he wouldn't navigate fundamental disagreements the way he does. Namely, he wouldn't automatically assume (in literally all cases) that his interlocutor is wrong and is the one who needs to do more verification, more work, more thinking, and more spiritual work; he would be open to the possibility that he fucked up and is fundamentally wrong.
The other thing that shows cultishness is that no suck-up mods and no hardcore Leo followers would assign even 10% credibility to claims like the alien awakening if they were introduced by someone other than Leo. The issue is not that his hardcore followers are open to the possibility that such an awakening is possible; it is that, given that Leo introduced it, there is no state of affairs where they would start to entertain the possibility that such a thing is false or implausible and that Leo is fundamentally wrong.
Like, what divine state of affairs would need to happen for them to say something like "Even though I can't be 100% certain of this, I still take Leo's claim about the alien awakening to be incredibly implausible, and I think he is fundamentally wrong", rather than forever repeating the cultish "I need to do more spiritual work to realize what Leo realized" and never lowering their credence in his claim (not even by 0.00001%)? We also know that no Leo suck-up here would ever accept the Leo disagreement protocol being used by someone who isn't Leo. Imagine having multiple drug trips that go against one of Leo's fundamental claims and then telling Leo and his suck-ups (after recognizing that there is a fundamental disagreement), "Well, well, well, you guys need to verify what I said for yourselves, and you guys need to question things deeper." Do you think the response from Leo and his suck-ups would be "Yeah, I need to rethink the insights from my trips, because there is an epistemic issue here that I need to resolve", or would it be "That guy is 100% wrong; he is the one who needs to do more verifying and more questioning. What a pathetic freak; this guy thinks he understands reality better than Leo, lol"? Which response is more plausible in that situation? The answer to that question tells you how cultish his suck-ups are and how dogmatic Leo is. My question for you would be: what behavior can you point to, by Leo and by his suck-ups, that shows they don't use platitudes like "verify it for yourself" and "be more open-minded" mainly as a defense mechanism to forever delay changing their own views (when those views are fundamentally challenged)?