Carl-Richard

Everything posted by Carl-Richard

  1. Bill Burr has said something like "some people think you're dumb just because you don't share the same interests as them". The concept of conflating knowledge with intelligence has become really clear to me over the last year or so. There have been many times where someone else didn't seem to understand what I was talking about, and it somehow contributed to them thinking I'm smart. Conversely, I tend to feel the same way when I don't understand somebody else. I think there is a mental heuristic that tells you "if you don't understand something, it must be due to your lack of innate abilities", while in reality, it's probably much more about your lack of experience in a certain area; contextual factors. It has really opened my mind about how I view "smart people" and how much of it probably boils down to experience.

     You can also observe it on a micro level in single conversations. For example, if you're talking to a group of people and you zone out for a few seconds, you might find yourself not understanding what is being said, and you might feel quite dumb for the rest of the conversation. But the moment you regain immersion in the conversation, you understand it and you no longer feel like a dunce. In this case, what was lacking was the knowledge about that specific conversation.

     As for more general knowledge, I have one particular example that sticks out. I'm currently taking a statistics class, and I attend as many lectures as I can. I'm in a group project with five other people, and it's generally just me and one other person who attend the lectures needed to understand the assignments. Not surprisingly, the others are seemingly amazed that we're able to understand the assignments, thinking we're so much smarter than them and that this is why we're carrying the group. But in reality, the true difference is that we went to the lectures and they didn't.

     Now, you can argue that we're the ones attending the lectures because we have the innate abilities to understand what is being taught in the first place, while the others don't. While this could be true, it could also be that they never attended many lectures and therefore never built up the momentum or continuous progression in knowledge. They do admit that attending the lectures helps them understand it at least a little better. And it's not like me and the other person understand everything 100% either. When we're working in the group, we're constantly learning new things, making mistakes, getting stuck, having insights, making adjustments. We feel stupid all the time, but we still work through it.

     Truly, if you want to point to an innate factor that may differ significantly between us, it's conscientiousness, especially the industriousness part (how much work you're willing to put in), which ties into how many lectures you're willing to attend. But even that can be learned to a large extent. I had to consciously learn how to be this conscientious, or at least how to manifest it in my actions to this extent. Regardless, at least in this situation, it suggests that the main deciding factor is how much work you're willing to put in and the experience you gain from that, rather than innate abilities.

     And according to this mathematician, if you're behind when comparing yourself to another person in your class, it only takes two weeks to catch up. How? Well, you're in the same class, and the class requires a certain level of skill to get into (which is especially true for graduate-level classes). You've also all been in the class for a relatively short time. There are probably many other factors as well, but you might start to see that the main factor is how much work you're putting in (and how it could easily be just two weeks). So there is hope for my classmates and anybody else who might be struggling in a class.

     This is somewhat related to how sophistry works. When you want to determine whether somebody is being coherent but you don't understand them, you go by their level of conviction and other superficial markers like fluency and verbal richness. It's like a back-up plan for when you don't understand someone but you need to know if they can be trusted or not, which is actually very often the case. It's also often required for learning new things. You need to trust in what you're learning before you actually learn it, and if you stop at the first sign of incoherence, you won't learn much of anything. So ironically, you need to be somewhat complacent with sophistry in order to actually become knowledgeable and to be able to spot sophistry when it truly happens. Knowledge is a Catch-22.

     And also ironically, the people in my group who don't attend the lectures need to become complacent with sophistry when it truly matters (during the lectures), and not just when they're in the group listening to those who have attended the lectures. They very often think we're being coherent when we aren't, so in those moments, we're sophists waiting to be called out.
  2. Debating veganism is generally like shooting fish in a barrel (see what I did there?) from the perspective of the vegan. But like you're saying, it's generally best when the person holding the gun is not batshit insane.
  3. Your Cosmic self (your ultimate nature) is not reducible to your personal thoughts. But your personal self is largely reducible to your personal thoughts. And it's not just reducible to your thoughts: it's reducible to most (technically all) things in your life. This should not be taken as a needless burden but as responsibility, not as fear but as courage.
  4. Cannabinoids have been shown to increase activity at the 5-HT2A receptors, which is how the "classical psychedelics" (LSD, psilocybin, DMT, etc.) produce most of their effects. To what degree this makes weed similar to classical psychedelics, I don't know. But there are many overlaps between weed and classical psychedelics in terms of the reported effects (you can look them up on PsychonautWiki).
  5. It's just an argument to be careful, and it's especially important to be careful when the probability of deception is shown to be high (69%-88% in some cases) and when you have fewer ways of uncovering the deception compared to humans (fluency markers, socioemotional markers, incentive markers, etc.). That is why you should use AIs like Perplexity; cut out the middleman. The human sources that Perplexity references could certainly be faulty, but it doesn't help if it additionally misrepresents those sources. That's just more problems.
  6. I just thought about this suggestion (of using AI to summarize a text that is worded clumsily and is hard to understand). Will the AI ever respond with "this text is incomprehensible and does not make any sense", or will it always give you an apparently coherent summary that doesn't necessarily reflect the text at all?
  7. Then the question is: are you trusting yourself, as a flawed human, to be able to identify the mistakes? Would it be wise to be generally careful with things that you know can possibly deceive you without you knowing it? I sometimes use Perplexity, which is an AI that provides the sources it used to create an answer. If you actually cross-check the sources, the general rule is that you'll find a lot of factual inaccuracies. It's the best way to use an AI imo, but if you have to cross-check the sources all the time, it basically works just like a sophisticated Google search. It's useful for that. Regardless, if you use AIs like Perplexity, it becomes even clearer how inaccurate AIs can be.
  8. Meditate.
  9. Video on art plz? Especially explain how modern art works 🤔
  10. There are a lot of videos critical of Leo. Is Leo legit?
  11. Just curious, but do you live in an area with high air pollution (PM 2.5)? I got myself a PM 2.5 measurement device and an air purifier for when I lived close to a highway tunnel last year, and it helped a lot with various symptoms (eye, nose, throat and lung irritation, coughing, sneezing, runny nose, and shortness of breath).
  12. Well, there is "body language analysis", and then there is body language analysis. Two very different things.
  13. I know how to use an AI. I was one of the first ones to use it 😉
  14. Ironic that you asked an AI for that. Those are general estimates, and apparently they're not very good (see https://gleen.ai/blog/the-3-percent-hallucination-fallacy/). The AI did not give you that information.
  15. They're signs, not proofs. AI doesn't have those signs. Look up the hallucination rates.
  16. https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive I hope you're not using it to learn about law 😝
  17. At this stage, at least more than an AI. Look up the hallucination rates in AI language models. They're staggering. And again, you have more ways to uncover inaccuracies or untruthfulness in humans. Humans generally care about being truthful; AIs don't (they simply happen to be generally truthful if they're coded and trained well). And when humans aren't being truthful, you have many ways to uncover the untruthfulness. A person might stumble over their words, make awkward pauses, blush, avert their gaze, change their posture in a weird way, start fidgeting, become restless or uneasy, become blunt or defensive, change their vocal tone, or become emotional or insecure, etc. An AI doesn't do that. I already mentioned markers like variations in fluency and verbal richness (untruthfulness often decreases these things). AI doesn't have such variations. Additionally, you can check which biases and incentives the person has (e.g. ideological affiliations, professional affiliations, economic incentives), and you can judge their character and past actions (e.g. positions of authority which require trust, general reputation, times caught lying). AI doesn't have that (except for past actions).
  18. God damn 😍 As if I couldn't get any more gay for that guy 😂
  19. Hehe, well it's not just a problem of bias, but of incoherence, irrelevance and delusion ("hallucinations"). It's like it's building sentences with these wooden blocks with words on them, and sometimes the words it chooses are from the wrong bucket, but the overall sentence looks relatively fine. So there is an additional deceptive element to the misinformation, which is scary. You can't use the back-up plan you use for determining coherence in humans on an AI, because it's always just as fluent and verbally rich as when it's providing accurate information.
  20. I might try that actually. Even if it provides misinformation, it's better than nothing
  21. By the way, a quick tip for anyone who wants to "increase" their intelligence, or more practically, improve their work: start high-intensity cardio (e.g. sprint training). If you want proof, just re-read the thread and see how much easier it is to read now (I revised it after my sprint training). (Of course, a confounding factor is that I slept really badly the day I wrote it and ate really bad food the day before; thank you 17th of May, our national holiday.) I might as well drop this one in here as well:
  22. For once, I managed to decode one of @Reciprocality's posts. He is not giving a prescription that you should try to be as generalistic as possible and not engage your mind in detailed, concrete knowledge. He is basically re-phrasing what I said in his own words: intelligence is a generalistic thing, and the more generalistic you are, the more you're able to generalize. I agree. He is a tough nut to crack. I generally (using my generalistic abilities) avoid trying to understand his posts, but today I had the impulse, stamina and luck to try and succeed. That is the strength of "neuroticism", by the way. It sometimes throws you a curve ball that you manage to deliver right into the corner of the net. On that note, I'm about to talk to my potential advisor about a potential project on mindfulness and mind-wandering/"neuroticism" (and many other ideas). She is coincidentally a researcher on mindfulness and the leader of the institute where I got most of my education, so that's fun.