Everything posted by zurew

  1. This kind of brain rot is largely Leo's fault (for making that kind of epistemic standard the default), even though he can't live up to it either. Where on this forum do we see people making arguments that establish their conclusion with 100% certainty? Nowhere. And of course, as much as we like to point to the problem of induction, there is a problem of deduction as well, but we can just ignore that because it's not convenient for collecting social credit.
  2. Goggins is a fucking menace. He is barely sweating as a 50-year-old with a broken body and fucked-up knees, while the pro MMA fighter is on the verge of death and throwing up from the intensity of the workout.
  3. I guess it's not that easy (but it's still relatively easy) if I combine the two claims: 1) you can pass down your lived experience, and 2) you can "transfer" memories with an organ transplant. This hypothesis can be disproved (beyond raising issues with the theory of memory and other things) by showing this: the person in question can't be traced back in the family tree and also had no involvement in any organ transplant in which the recipient was one of the boy's ancestors. Pretty wild claims, but I wouldn't 100% rule them out.
  4. The idea would be that you can pass down your lived experience and not just your genes, and that in some way it's possible to tap into that. This hypothesis can be easily disproved if it's clear that the person in question can't be traced back in the family tree. I don't know what the research on this says, but I have seen stories about heart transplants where some patients claim they have new memories and feelings about things that don't seem to belong to them. If it's indeed possible to "transfer" memories with an organ transplant, then I wouldn't rule out the possibility that you can do the same with smaller things. In this case the explanation would be that you can tap into the memories of your ancestors, not that your soul or mind reincarnated in a new body.
  5. Jordan Hall would summarize what you said with "digital vs analog" (implying constant change and the inability to pin things down). But yeah, it seems that you are talking about things that would ground some of the things I take for granted in order to make my argument possible (theory of meaning, theory of truth, what makes intelligibility possible, what a category is, etc.). Those are discussions that are all beyond me, and it's very likely that because I don't understand how many ways those things can be cashed out, I unconsciously just use a set of theories that I can't even explicate, yet my ability to analyze those things is subject to the limitations of those theories. All those unconsciously affirmed/begged theories are embedded in my thinking every time I attempt to analyze things. But I guess this is a discussion for later, for a time when I have a better chance of tracking what's happening when those discussions come up.
  6. Yeah, I'm kind of into his work, and again - yeah, everything that you are saying seems to be 100% compatible with what I am saying. "There's a direct relationship between truth and relevance realization." - would this proposition suddenly become false if I had the opinion that it's false? No, it wouldn't; therefore its truth value isn't dependent on my opinion, and it's true independently of what attitude I have towards it. And we can apply the same thing to the proposition "it's only true that poison is poisonous in so far as there is an agent for whom this fact bears any relevance."
  7. I guess I just don't see how that contradicts what I said. I don't think I necessarily need to affirm a particular metaphysics (one that wouldn't be compatible with what you outlined) in order to make the statements I made. I will lay out what I believe and assume is happening here, but again, I can be wrong, and it's perfectly possible that I don't track at all. Working with the poison example further - it's irrelevant whether being poisoned is a relational phenomenon or not; what matters is what makes 'you dying from consuming poison' true or false. Does Trump stating his opinion about this particular matter have any weight on whether it will kill you or not? No. Even if Trump tells you that it will kill you, that's still irrelevant, because sure, his opinion can be right, but you won't die because he said it or because he believed it's true; you die because it's a fact of the world. We can define facts in a relational way, but I think that won't have any bearing on what I am saying (because in a similar way I can have false beliefs and opinions about those relational facts as well). Further clarification: by opinion I just mean having an attitude towards a proposition, and by proposition I just mean a declarative statement that can be true or false. There are truths that are true independently of what attitude (opinion or belief or preference) we have about them - this is what I meant by objective truths, and by independently true I mean that even if all agents changed their opinion about a particular proposition, the truth value of said proposition still wouldn't change (in the case of subjective truths, it would change).
  8. I might have a completely wrong read on him, but I think that he is trying to establish objective morality in the sense I outlined, though I can be wrong. To me this is similar to how he uses the term "God". Given his definition of God as "whatever is at the top of your value hierarchy", all atheists can say that they value God and that they believe in God, but let's not pretend that, given this completely different sense of God, he has somehow established that all atheists believe in some kind of all-powerful, all-good Mind. Also, to be clear, it doesn't have to be transcendental in the sense of 'truth existing independently of all life' (like truth existing in some weird realm independent from this world); it's just that it's not dependent on the opinion of any agent or any group of agents. It's similar to the idea that consuming a large amount of poison will kill you, no matter what any group of agents says or thinks about it (because the truth value of it killing you isn't dependent on their opinion). This doesn't entail that the truth of the poison killing you exists in some transcendental logic realm, because it can be dependent on the laws and vulnerabilities of this particular world (where changing the truth value would be done by changing physical laws, not by changing people's opinions). So the definition I gave is compatible with a transcendental realm, but it's also compatible with it being a fact of this world.
  9. I don't think that's the issue; the issue is that (as almost always) there is an equivocation going on. I don't want him to make a syllogism, I want him to be honest and not confused about what argument he is actually making. It has to do with what is meant by the term objective morality - if Peterson uses that term as something like "there are perennial patterns and acting them out will lead to certain outcomes", sure, I can grant that - but that doesn't really respond to the issue of subjective morality (the position where the truth value of moral statements is dependent on a subject or a group of subjects - where if they change their stance about a particular value, the truth value of those moral statements changes as well). Peterson's "critique" is not a response to subjective morality; it's just a completely separate claim that can be denied or affirmed completely independently of what position you have on subjective morality. What I would look for is an argument that establishes that there are moral statements (statements that actually use terms like good and bad) that are meaningful and that can be true or false completely independently of what any individual or any group of agents thinks about them. So under this definition of objective morality, for example, the truth value of the moral statement 'rape is bad' could be true even if all people on Earth thought otherwise. Making an analysis that ends with a conclusion containing a set of objectively true descriptive statements (like rape will lead to x, y, z outcomes) has nothing to do with morality; subjectivists can agree with all of that even if some of them think that rape is good. And to be clear, I don't care about the definition game. What I care about is this: if Peterson wants to critique objective morality (under the definition most people use), then he should index his criticism to that, but using the exact same term with different semantics doesn't really do the job; the only thing he establishes with that is that he makes a completely separate claim (and that's all fine as long as he isn't confused about it and as long as he doesn't pretend that he established objective morality in a different sense).
  10. Well, Mr. Peterson, that just seems to be a descriptive claim about what set of values and what set of behaviors would be most aligned with human flourishing - but it's nothing more than a descriptive claim; there is no ought embedded there. It's not just that it's not an objective moral claim, it's that it's not even a moral claim at all. It's similar to giving a very precise physics equation about how a rock will behave once you interact with it in a certain way and then saying that it ought to be that way and that objective morality is established by that.
  11. Yeah, I personally think that Peterson is confused about morality. I don't see how him and Jonathan Pageau talking extensively about all the archetypes and perennial patterns would be reflective of an objective morality; to me it's just a description of our collective unconscious and our conscience at best - but it's nothing more than a description, and there is no 'ought' embedded there. There is a big difference between making an analysis of what kind of values most people have, how we act and behave, and what the outcome of that is, vs. making moral statements that are true independently of what any particular person or any particular group of people thinks about them. They haven't made any argument (that I am aware of) that wouldn't be compatible with subjective morality.
  12. "Let me grab a random set of metrics and then let me assume that the data the AI presents me with will be accurate." Wow, not all practicing doctors conduct experiments, do research, and spend their time studying philosophy of medicine on a daily basis? 1) If you don't want to create a world where each practicing doctor is freely allowed to come up with their own epistemic and ethical norms when it comes to treating patients, then you will eventually end up with a system similar to this one. 2) You probably don't want all doctors doing experiments and research - in a working society you want some doctors to spend their time treating patients. It's very clear that some of you abuse the fck out of the buzzwords that Leo shared with you, like "holistic" or "appeal to authority". With regards to the appeal to authority - yes, it can be said that it's fallacious reasoning, but that's not the entire story, because it can be used as a heuristic (where, given that you have low info and low knowledge in a given field, you assume that whatever the experts or the expert consensus concluded is probably your best bet). You don't even know how to properly contextualize the data in front of you, or what kind of norms to use to evaluate it, because you are not trained in the field - so the question is, why don't you ever question your ability to make reliable inferences about fields that you have 0 training in? The funny thing is that almost everyone can recognize this when it comes to fields where your assumptions are tested immediately (like engineering jobs and roles). You can't just come up with your own set of metrics and norms and then build a bridge or put a car together. "Bro, you haven't directly tested how flammable gasoline is, you just believe in the dogmas that the stupid and unconscious experts are feeding you; get more holistic and wake up from the matrix." Given the complexity of medical fields, given that you can't conduct experiments on a big sample of people, and given that you have no ability to even begin to isolate variables, you can infinitely bullshit yourself and pretend that you are smarter than everyone else and that you have some kind of special insight. ---------------------------------------------------------------------------------------------------- When it comes to the holism and holistic part: just because you use more norms, it doesn't follow that you will be more accurate. I can have an epistemic norm of observing the grass for 10 seconds: if the wind blows the grass within that timeframe, I infer that the answer to my question is yes, and if it doesn't, I infer that the answer is no. I can then integrate this epistemic norm with my other epistemic norms and pretend that I'm more special and smarter than people who haven't integrated as many epistemic norms as I have. ------------------------------------------------------------------------------------------------------ When it comes to the direct experience criticism - what do you think you are saying there? Should all doctors try all the treatments and all the pills on themselves before they prescribe anything to patients?
  13. Yes, I agree that the way I outlined it is not the only way we use the term, but I thought you were using it in that sense because you were originally responding to Nilsi about metaphysics. Regardless, my main point is that given that this term can be used in multiple ways, we should use it context-sensitively so that we don't engage in equivocation.
  14. I will add one more to the necessary conditions - You are not yellow if you don't look like a redneck https://www.facebook.com/photo/?fbid=10162079724626779&set=gm.8315254938529114&idorvanity=8277508165637125&locale=de_DE
  15. In that case, you are basically using reductionism to mean just "explanation". I take reductionism to be a specific position with regards to realness, and I also take it to be a subset of explanations - it is an explanation, but of a specific kind, where realness is accounted for only by the fundamental parts (this is where the 'you are just atoms, bro' comes from): the idea that "lower levels" can exhaustively explain things that are on the "higher level". Basically, I take it to be the rejection of strong emergence, where higher-level parts have certain causal powers that can't be fully accounted for by their lower-level parts. To me it's very clear that if there are two people, and one of them believes in strong emergence and the other doesn't, then calling both of them reductionists would be very misleading.
  16. It's supposed to mock the idea some MAGA folks have that creating more factories, and therefore more factory jobs, is actually so cool, because factory work is cool and masculine and people need those jobs over some gay liberal office jobs.
  17. https://x.com/jamiewheal/status/1910704519693971812
  18. @Leo Gura Where do you put Jordan Hall?
  19. That way of using those words seems wildly misleading and inappropriate in most contexts. When you give, for example, a causal explanation, you don't suddenly provide a new substance to the thing being explained. "Why are you drunk? Well, because I drank 10 beers" - did I provide a 'drinking alcohol' metaphysics to being drunk? That question doesn't make much sense. Another example would be saying that the reason matter exists is because God created matter - that doesn't mean, though, that God is made of matter. John Vervaeke has a metaphysics that very clearly doesn't buy into the idea that things can be exhaustively explained by, or reduced to, their simpler/smaller components.
  20. Yeah, I think people here oftentimes confuse having the "correct" take with level of development. I don't know if you have seen that convo, but this kind of goes back to the morally lucky convo Destiny had with Rem about Hasan. It was almost exactly what you said there - the reason why you (in that case Hasan) are not a neo-Nazi is not your level of development or that you actually reasoned your way there on your own, but that you were lucky that your close environment indoctrinated you with beliefs that we collectively take to be more acceptable and correct. "If you are a very good reasoner and you have the ability to synthesize and juggle multiple perspectives towards an acceptable moral and value system that is aligned with mine, you are highly developed and very much above orange and at least yellow; but if all the same things apply except that you have a different moral and value system, you are stuck in orange at best."
  21. And that's just the start - we can easily attach other arguments to this, like: you can't sustain exponentially growing energy needs on a finite planet. Before anyone says "but efficiency bro" - that doesn't work in practice, mostly because of the Jevons paradox. The more efficient shit gets, the more accessible it gets, and on a net scale we end up spending much more energy (a stylized rebound-effect sketch is included at the end of this list). Today, the average person uses more energy in a single day than a king did in an entire year centuries ago. AI just makes this whole thing 10x worse: 1) because it makes things more efficient, and 2) because the better it gets, the more things it can be used for, which goes back to the Jevons paradox. We also have no good way to properly price things (in a way where the price is not decontextualized, but is contextualized in the context of the whole world) - which inevitably leads to us externalizing harm. Why? Because we don't immediately need to pay the price for it ("other stupid people, who will be directly affected by it, will pay the price for it") - if everyone had to pay the real price, almost all businesses would immediately go bankrupt. And we can go on with other issues (the AI alignment problem, environmental issues, etc.), but one main point is that a naive "but history though" doesn't work, because shit is wildly different now in multiple ways.
  22. I mostly agree, but you can be rational, have a high IQ and good reasoning capabilities, and still be lost. You probably agree with this: having those traits won't help you navigate self-deception - in fact, in some cases, having those traits absent some other traits will make you more prone to self-deception. Motivated reasoning, when you are good at it, can be crazy misleading, and the better you are at it, the higher the chance that you will manage to bullshit yourself.
  23. This will only get worse unless something tangible is done to restore trust in institutions and authority figures. You either use proxies to make sense of things for you, or you make sense of things yourself. Each has downsides, but the fact of the matter is that you have neither the time nor the expertise to properly evaluate all the data, especially given that we are talking about multiple domains. Because of the distrust, even if these people agree on all the facts, they will choose the most adversarial and bad-faith explanation, no matter how far-fetched it sounds, because that will be the one that matches most closely the model they have of institutions. This problem is way beyond what any doctor can handle or deal with. Even if one doctor knew all the facts and all the conspiracy theories and could respond to all of them perfectly, this shit wouldn't change or move most of these people, and the lack of trust would remain the same. You won't increase trust by being right, but you can destroy trust forever just by fucking up once. If there is a new situation where there is any lack of certainty, the worst possible and most adversarial scenarios will be projected onto it by default.
  24. Yeah, sure, but once the purposes are specified, we can give an answer to that question. If we care about x set of values and we are clear about what that set contains, then we can go on to talk about how Trump affects those values. Of course, there can be other layers of disagreement - like disagreeing about how we should measure how much we progress toward or regress from that value set - but hopefully we can ground that in a shared meta-epistemic norm and use that to figure out which epistemic norm measures more accurately. I think a lack of clarity and a lack of explication of what kind of norms we are using to collect a set of facts, and what kind of norms we use to tell a story about that set of facts, is one of the things that makes us very prone to self-deception, because our underlying biases can hop into the evaluative process (especially when it comes to the storytelling part - the weight of each fact will be different, and even which set of facts you collect will be different). But yeah, of course this goes much deeper, because there are some meta-norms (having to do with relevance realization) that we unconsciously use to determine what we consider reasonable vs. far-fetched, and those meta-norms will probably never be exhaustively explicated; because of that, some disagreements in practice won't ever be "solved". Even if we agree on an argument (with all the premises, the conclusion, and the rule of inference as well), we can still disagree on what kind of implications follow from the conclusion. There is an infinite set of logically possible implications that can come from any given conclusion, and this goes back to the "reasonableness" problem I outlined above, which I have no good answer to, other than that we should train our ability to explicate those epistemic norms as much as we can (so that ever deeper layers of disagreement can be specified, pointed out, and then argued about without being vague). TLDR - ultimately there has to be a norm when it comes to navigating any disagreement, because otherwise disagreements wouldn't be possible. But exhaustively explicating that norm is impossible in practice, so we will probably never solve the deepest disagreements.
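A minimal stylized sketch of the rebound-effect (Jevons paradox) point from post 21, assuming a constant-elasticity demand for the energy service; the symbols η (efficiency), ε (price elasticity of demand), p_E (energy price), and A (a scale constant) are illustrative assumptions introduced here, not anything from the original post.

```latex
% Stylized rebound-effect sketch (illustrative assumptions only).
% \eta : efficiency of turning energy E into an energy service S, so S = \eta E
% p_E  : price of energy, making the effective price of the service p_S = p_E / \eta
% Assume constant-elasticity demand for the service: S = A\, p_S^{-\varepsilon}.
\[
  S = A \left(\frac{p_E}{\eta}\right)^{-\varepsilon} \;\propto\; \eta^{\varepsilon},
  \qquad
  E = \frac{S}{\eta} \;\propto\; \eta^{\varepsilon - 1}.
\]
% If \varepsilon > 1, total energy use E rises as efficiency \eta improves
% (backfire, i.e. the Jevons paradox); if \varepsilon < 1, efficiency gains
% still save less energy than the naive 1/\eta estimate would suggest.
```

Under this toy model, whether "more efficiency" reduces or increases total consumption hinges entirely on the demand elasticity, which is the crux of the "but efficiency bro" objection in that post.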