aurum

Member
  • Content count

    5,898


Personal Information

  • Gender
    Male

Recent Profile Visitors

18,615 profile views
  1. I don't think I have some simple set of heuristics. I mostly think in terms of tradeoffs and what will lead to greater societal development / holism. I prioritize depth of sense-making, not immediately actionable solutions.
  2. That's another thing I feel sometimes gets missed in this AGI discussion. People conflate AGI with human-like or even god-like intelligence. But you could make an AGI that is still relatively dumb compared to humans. That would probably be the starting point. So even if the tech bros create AGI in the next couple of years like they're betting on, they're also betting that this AGI will be of at least human-level intelligence. That's an additional bet.
  3. To some degree, but I'd say that's a largely incomplete theory if we were just to stop there. People in poverty often have some of the highest birth rates. Birth rates are some complex mix of biology + contraception availability + needing children for economic reasons + cultural narratives + gender equality + environmental constraints.
  4. I like Bashar's thinking here, but I'd push back in a couple of ways. 1) Yes, true intelligence operates on whole-systems thinking. This is correct. But it would be a mistake to assume whole-systems thinking = totally harmless. A true intelligence could still make decisions that cause tradeoffs within the system. 2) He seems to think that AI will be more intelligent than humans, and we will have to catch up to its level. I suspect it's the other way around. Humanity will align with greater intelligence first, and only then may we create intelligent AI. Greater intelligence coming first feels backwards. 3) We are not close to building the kind of AI he is describing.
  5. Okay, but now that essentially creates a two-tier system, where the rich can afford to opt out and raise their children how they like, whereas poorer people who need the money will be subject to state regulation and bureaucracy. It would be analogous to public and private schools. Private schools have become a luxury good. Is it worth it? Maybe. The point is simply not to be foolish enough to think that socialized motherhood won't have significant tradeoffs, and to think carefully through what they might be, rather than just plowing ahead like a bull in a china shop. The tradeoffs aren't novel; they are extensions of the same general tension between individualism and collectivism. Nothing that absurd is being proposed. Free speech absolutism is obviously wrong, and in practice every society will have to decide how much it wants to socialize motherhood. Absolutes tend to be way too politically controversial and impossible to implement, so we end up with some mix. The question is what the right mix is. My rule of thumb is subsidiarity: the state should step in when individualism is not enough, but the state should not come first.
  6. Careful, though. Once you socialize motherhood, that opens up a whole can of worms. It further blurs the lines between private and public life, and there will be serious tradeoffs, like additional regulation and politicization around parenting. Mothers need support, but how much of a role the state should play is not an easy question.
  7. For the purposes of debating whether a crash will happen, it should be considered AGI when it can replace humans and even do a better job than they do. This is what these companies are betting on, not just cool LLMs. You're right that we have not yet seen what massive amounts of compute will do. This is my prediction based on how I understand intelligence: scaling compute will fail. In the future, people may wise up and invest in other strategies, but right now scaling is the dominant strategy, and it's an increasingly failing one. This is not just my opinion, either; it's the opinion of many serious AI researchers who understand the technical details better than I do. The crash could be serious for the economy because so much is being propped up by investment in AI right now. Whether or not it will be as big as 2008, I don't know.
  8. I don't know if there will be a huge crash per se, but I know many of these big AI companies are overhyped right now. They will not create AGI any time soon. These CEOs are betting that scaling compute is enough, and it very clearly isn't. They need that to be true, because that theory is what fueled the success of these companies in the first place. We got GPT-3 and the other current LLMs because of scaling; if scaling doesn't work moving forward, they are cooked. What we have instead are non-intelligent tools (LLMs) that appear useful in some limited contexts such as coding, customer service, and brute-force calculation. But this does not justify the insane amount of money coming into these companies. They are investing in infrastructure on the assumption of trillions in revenue from AGI over the next couple of years, which is laughable. They are in way over their heads with their own investments. All this infrastructure may turn out to be useful once it's built, but that doesn't mean the whole thing won't crash on them before that happens. It very well could.
  9. Agreed. This resolves some of the binary tension between survival and truth-seeking.
  10. Appreciate the work he is doing. But also interesting to note how he completely misses the metaphysics behind it. Peace & Bliss = God.
  11. What a strange thread. Casual sex has extremely sharp diminishing returns. Once you've had some it loses like 90% of its luster.
  12. Hamilton Morris, John Vervaeke, Andrew Newberg, Sam Harris, Anil Seth, Roland Griffiths, Rick Strassman, Robert Wright, Thomas Metzinger, Evan Thompson. What do they all have in common? They all validate mystical experiences, but either deny or remain uncommitted on the actual metaphysics. At least publicly.