Everything posted by thenondualtankie

  1. Can't we just revolt if our big tech overlords aren't sharing the fruits of AI? Also, competition drives the cost of things to zero when there's no labor involved. If everything is automated, nothing is profitable, as long as there are competing entities doing the automation.
  2. Lol this guy on Reddit experimented with the idea of giving 'drugs' to LLMs by injecting randomness into their inner calculations. Since these calculations are layered, the randomness compounds as it propagates through the layers. https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/ Interactive demo: https://egjoni.github.io/DRUGS/sample_generations/
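     To make the idea concrete, here's a minimal sketch of noise injection using PyTorch forward hooks on a Hugging Face GPT-2. The noise scale and hook placement are my own illustration of the general idea, not the actual DRUGS implementation (which injects noise more selectively inside attention):

     ```python
     # Sketch: add Gaussian noise to each transformer block's output.
     # Because each block feeds the next, the noise compounds layer by layer.
     import torch
     from transformers import GPT2LMHeadModel, GPT2Tokenizer

     model = GPT2LMHeadModel.from_pretrained("gpt2")
     tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

     NOISE_SCALE = 0.02  # the "dose"; purely illustrative

     def inject_noise(module, inputs, output):
         hidden = output[0]
         noisy = hidden + NOISE_SCALE * torch.randn_like(hidden)
         return (noisy,) + output[1:]

     # Hook every transformer block so the perturbation propagates.
     hooks = [block.register_forward_hook(inject_noise)
              for block in model.transformer.h]

     ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids
     out = model.generate(ids, max_new_tokens=40, do_sample=True)
     print(tokenizer.decode(out[0]))

     for h in hooks:
         h.remove()  # sober up
     ```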
  3. GPT-4 is way better than GPT-3.5, but it's only worth it if you use ChatGPT substantially throughout the week.
  4. What a strange little twist! That's intelligence. Examples of GPT-4 doing this?
  5. @zurew A child knows the abstract notion of 'not', but often fails to follow an instruction when the 'not' is embedded in a more complex instruction. Yet a child has general intelligence. In language models, this happens because their attention heads might not attend to the word 'not' enough. Funnily enough, that's exactly analogous to the child: they fail to follow the instruction not because they didn't hear it, but because they weren't paying attention to the word 'not'. It went in one ear and out the other.
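     A toy illustration of that attention intuition (the scores are made up, not from a real model): if the query-key score for 'not' is low, softmax gives it a tiny attention weight, so its value vector barely influences the output and the instruction is effectively inverted.

     ```python
     import torch

     tokens = ["do", "not", "press", "the", "button"]
     # Hypothetical query-key scores for one attention head:
     scores = torch.tensor([1.0, 0.1, 2.5, 0.5, 2.0])
     weights = torch.softmax(scores, dim=0)
     for tok, w in zip(tokens, weights):
         print(f"{tok:>8}: {w:.3f}")
     # "not" ends up with roughly 4% of the attention mass,
     # so the head mostly "hears" do-press-the-button.
     ```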
  6. Zurew, I'd like you to provide some examples of that. You have no idea what you're talking about. GPT-4 does not have perfect reliability in following instructions, including negations. The same applies to humans. That says nothing about its general ability to understand. If it doesn't understand, then how do you explain its ability to solve unseen problems? For example, as I mentioned above, it can solve a wide array of programming problems it has never come across. You could probably google more examples of its problem solving.
  7. A lot of people are in denial about AGI. The reason AGI is coming is that we've already achieved some level of intelligence via things like ChatGPT, and what remains is a straightforward scaling-up of that intelligence. The reason people believe the scaling will continue is that it has shown no sign of stopping. Scaling isn't going to suddenly stop working once we're just below human intelligence. That would be utter anthropocentrism: 'human intelligence is special and computers can never reach it!'

     As for whether ChatGPT actually attains some level of intelligence: if we just focus on the 'IQ' aspect of intelligence, then ChatGPT absolutely has this, in the sense that it can solve new problems it has not seen before. For example, GPT-4 can pass software engineering interviews using unseen questions. You can cope and seethe about how 'it's just combining things from its training data, not creating anything new', but that is literally how intelligence works: combining old information to create something new.

     People have a massive bias in favour of 'things will always continue the way they currently are', probably because their entire survival strategy depends on it. You see this especially starkly among people like software engineers, where AI hasn't fully hit them just yet, meaning they can still get away with not facing the reality that AI will surpass their abilities.
  8. Thanks for the writeup. I've been thinking the same for myself recently: I need to create or find an ecosystem that promotes my growth and values. And this ecosystem inevitably involves community. For a long time, much of my ecosystem has been browsing the internet all day. That's something that needs to be replaced. What actions have you taken so far to create your desired ecosystem? What's your plan?
  9. Very interesting article about what will happen to the state of AI if current scaling laws continue to hold until 2030. Scaling laws describe how intelligent or performant AI models get as the amount of compute and data thrown at them scales up. As of right now, the more compute and data we throw at AI, the better it gets, and so far there is no sign of a limit. Here it is: https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/. Again, this article simply projects the current rate of progress in AI out to 2030.

     Key findings and predictions. The AI will:

     • Be better at mathematics and programming/coding than all humans. This means the AI will be doing most of our mathematical research, and probably all of our programming.

     • Have non-human 'senses'. Currently, the best AI systems can read text and see images, but these are extremely anthropocentric ways of interacting with data. The article raises a fascinating point here: what if, instead of adding an ordinary human sense like sound or touch, we added an absurd 'sense' like brain-wave data? Imagine a sixth sense that, instead of receiving light for vision, received brain-wave data and perceived it in an intelligent way (i.e. not as a bunch of fuzz). The AI would then likely "possess a strong intuitive grasp of domains where we have limited experience, including forming concepts that we do not have". It would be able to directly understand brain-waves. GO CRAZY WITH THE ABOVE IDEA! Think of any possible modality, and the AI will (potentially) have a 'sense organ' that directly perceives it: molecular physics, genetics, chemical engineering, protein folding, literally anything where we can collect swathes of data. Maybe the AI will have very strong intuitions about drug manufacturing, and it will be able to design specific psychedelics that achieve a specific effect. You want a psychedelic that shows you cats every time? You give it a genetic sample and some recordings of your brainwaves, and it creates the exact drug that would work for you. Well, at least the recipe for the drug.

     • Be able to self-improve. This wasn't really mentioned in the article, but in my opinion it's an obvious next step. Self-improvement and self-teaching are key components of any kind of intelligence. If humans can self-improve (for example, babies self-improve at walking), then so can any intelligence at or above human level. I think GPT-4 already has the potential to self-improve, but we haven't come up with a suitable architecture for it just yet. I imagine it would look something like this: suppose the AI is given full control of a car and wants to learn how to drive. It could create a realistic simulation of car-driving (with its expert programming ability) and then practise driving for millions of simulated hours. Its learnings would then transfer to real life.

     I hope this wasn't too long.
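     For anyone who wants 'scaling laws' made concrete, here's a minimal sketch using the Chinchilla-style power law L(N, D) = E + A/N^alpha + B/D^beta from Hoffmann et al. (2022). The constants are their published fits; assuming they keep holding at 2030 scale is exactly the kind of extrapolation the article makes:

     ```python
     # Chinchilla-style scaling law: loss falls smoothly as parameters (N)
     # and training tokens (D) grow. Constants are the Hoffmann et al. fits.
     def loss(n_params: float, n_tokens: float) -> float:
         E, A, B = 1.69, 406.4, 410.7
         alpha, beta = 0.34, 0.28
         return E + A / n_params**alpha + B / n_tokens**beta

     # More compute and data push the loss down with no cliff in sight:
     for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e13, 2e14)]:
         print(f"N={n:.0e}, D={d:.0e} -> predicted loss {loss(n, d):.3f}")
     ```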
  10. Success is relative. When you are already born into a rich family, success to you is not simply being able to maintain that level of wealth. Trump is a failure in many regards, for example intellectually and emotionally.
  11. I present two examples for why I believe this.

     1. In December, Google announced an AI that beats 85% of programmers at competitive programming. Competitive programming is a form of programming where people compete to solve difficult mathematical/algorithmic problems. Here is their video: Programming is ultimately a translation task, which language models excel at: a translation from software requirements to actual code. This means language models should be able to translate any idea into full code once they're powerful enough, essentially replacing programmers.

     2. In the medical scene, there have been countless stories and reports of ChatGPT accurately finding a medical diagnosis where real doctors had failed for years. For example, a boy saw 17 doctors over 3 years and only ChatGPT was able to diagnose him correctly (I linked to the Reddit thread where you'll find similar anecdotes in the comments). This happens because ChatGPT can engage in holistic thinking across its enormous knowledge base, whereas most doctors can't think outside of their narrow medical worldview. Many of you will be starkly aware of this.

     The bottom line is that we just need to wait a few more years until these AIs become more intelligent and much more reliable. This is not just another hype trend like NFTs or the metaverse. This is actual intelligence.

     Edit: Here's a paper released just today by DeepMind. Their AI seems to outperform real doctors at diagnosis.
  12. @UnbornTao I like OpenAI's definition of AGI which is an AI system that can replace humans at 'most economically valuable tasks'. A slight issue with this definition is that AGI will likely have a very skewed skillset, so we may end up with an AI that is superhuman at programming but still wouldn't be able to understand video as well as humans. So AGI could also be 'general intelligence which exceeds or matches humans at many tasks while being shit at others'. 'What is AGI' is almost like 'what is intelligence' so it's rather challenging. We could also go into its impact on society if that's what you were asking.
  13. I agree completely. AI is a hype cycle, but it's not just a hype cycle.
  14. @BlueOak To be honest, I somewhat doubt the usefulness of AI as a therapist. I think therapy generally works because of the human connection, which AI doesn't replicate at all in my opinion. What do you think?
  15. If you do extreme practices, you will get extreme results. Psychedelics are just one example of an extreme practice. Yes, blasting off to other dimensions should be classified as extreme. Other such practices include rigorous breathing and yoga techniques, and meditation retreats. If you meditated for 7 days straight, you'd get some serious spiritual growth from that.
  16. I'm looking to get involved with UK's Psychedelic Society. Here are their upcoming events: https://psychedelicsociety.org.uk/events. I'm just looking for people to share their experiences with places like these.
  17. Leo recently (a few months back) said he's making a game so it could be that. I won't speculate further though.
  18. @Squeekytoy A Discord does not count. I'm talking IRL. @Bazooka Jesus Sick!! Could you tell me more? Did you get many new-age dogma vibes?
  19. bump bump bump
  20. Some brilliant news! Biden exceeded all of our expectations.
  21. Unfortunately, the Gemini video demo was faked. They actually fed Gemini a few still images, not video. Also, a few of their benchmarks were basically rigged in favour of Gemini. For example, the MMLU benchmark, where they claim Gemini surpassed human expert level at multi-domain question answering? They allowed Gemini 32 chain-of-thought attempts and took a consensus final answer, and then compared that against GPT-4's standard 5-shot score, which was measured without any.
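     For clarity, here's roughly what that evaluation protocol ("CoT@32") looks like: sample 32 chain-of-thought answers and take a consensus. Google's actual 'uncertainty-routed' variant is more involved, and sample_answer() below is a hypothetical stand-in for a real model call:

     ```python
     import random
     from collections import Counter

     def sample_answer(question: str) -> str:
         # Stand-in: a real run would sample one chain-of-thought
         # completion from the model and parse out its final answer.
         return random.choice(["A", "A", "A", "B", "C", "D"])

     def cot_at_k(question: str, k: int = 32) -> str:
         # Majority vote over k sampled reasoning chains.
         votes = Counter(sample_answer(question) for _ in range(k))
         return votes.most_common(1)[0][0]

     # 32 attempts plus a consensus vote is a far easier setting than the
     # single 5-shot answer GPT-4's headline MMLU number was measured with.
     print(cot_at_k("An MMLU-style multiple-choice question..."))
     ```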
  22. The only way to find out is to try it. No matter how much reading you do, no matter how much you know about other people's experiences, you will know nothing about what DMT does to YOUR system. It affects everyone differently. You can even feel the way in which it meshes with your particular genetics.
  23. A funny observation about this forum is that a lot of people copy Leo's writing style. Anyone else noticed this? Leo tends to speak in very blunt sentences. Which means he sounds like this.
  24. On psychedelics when I lost my phone I felt like I lost a part of myself. As if I had lost my arm.