thenondualtankie

Member
  • Content count

    320
  • Joined

  • Last visited

1 Follower

About thenondualtankie

  • Rank
    - - -
  • Birthday 04/06/2002

Personal Information

  • Location
    United Kingdom
  • Gender
    Male

Recent Profile Visitors

2,102 profile views
  1. Hi Yimpa. Most of the GPTs are basically useless in my opinion; you get the same benefit by prompting the default system yourself. However, 'Consensus AI' is a pretty useful one because it lets you search a database of academic papers. I find it more useful to create my own GPTs as I need them. For example, I created a psychedelic assistant GPT that helps with things like testing, ROA and general knowledge around psychedelics. The main benefit is that it gets straight to the point without prefacing every message with a thousand safety and legal warnings.
  2. I use ChatGPT quite heavily and very rarely reach the message limit.
  3. Cool concept art. How does it perceive the world?
  4. Can't we just revolt if our big tech overlords aren't sharing the fruits of AI? Also, competition drives the cost of things to zero when there's no labor involved. If everything is automated, nothing is profitable, as long as there are competing entities doing the automation.
  5. Lol, this guy on Reddit experimented with the idea of giving 'drugs' to LLMs by injecting randomness into their internal calculations. Since these calculations are layered, the randomness compounds as it propagates through the layers (there's a rough code sketch of the idea after this list). https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/ Interactive demo: https://egjoni.github.io/DRUGS/sample_generations/
  6. GPT-4 is way better than 3.5, but it's only worth it if you use ChatGPT substantially throughout the week.
  7. What a strange little twist! That's intelligence. Examples of GPT-4 doing this?
  8. @zurew A child knows the abstract notion of 'not', but often fails to follow an instruction exactly when the 'not' is embedded in a more complex instruction. Yet a child has general intelligence. In a language model, this happens because its attention heads might not attend to the word 'not' enough (there's a toy illustration of this after the list). Funnily enough, that's exactly analogous to the child: they don't follow the instruction, not because they didn't hear it, but because they weren't paying attention to the word 'not'. It went in one ear and out the other.
  9. Zurew, I'd like you to provide some examples of that. You have no idea what you're talking about. GPT-4 does not follow instructions, including negations, with perfect reliability; neither do humans, and that says nothing about its general ability to understand. If it doesn't understand, then how do you explain its ability to solve unseen problems? For example, as I mentioned above, it can solve a wide array of programming problems it has never come across. You could probably google more examples of its problem solving.
  10. A lot of people are in denial about AGI. The reason AGI is coming is that we've already achieved some level of intelligence with systems like ChatGPT, and what remains is a straightforward scaling of that intelligence. People believe the scaling will happen because it has shown no sign of stopping, and it isn't going to suddenly stop working once we're just below human intelligence level. Believing that is utter anthropocentrism: 'human intelligence is special and computers can never reach it!' As for whether ChatGPT actually attains some level of intelligence: if we focus just on the 'IQ' aspect, then it absolutely does, in the sense that it can solve new problems it has not seen before. For example, GPT-4 can pass software engineering interviews using unseen questions. You can cope and seethe about how 'it's just combining things from its training data, not creating anything new', but that is literally how intelligence works: combining old information to create something new. People have a massive bias towards 'things will always continue the way they currently are', probably because their entire survival strategy depends on it. You see this especially starkly among people like software engineers, where AI hasn't fully hit them yet, so they can still get away with not facing the reality that AI will surpass their abilities.
  11. Thanks for the writeup. I've been thinking the same for myself recently: I need to create or find an ecosystem that promotes my growth and values. And this ecosystem inevitably involves community. For a long time, much of my ecosystem has been browsing the internet all day. That's something that needs to be replaced. What actions have you taken so far to create your desired ecosystem? What's your plan?
  12. https://www.reddit.com/r/interestingasfuck/comments/1akrpxk/well_didnt_have_this_one_on_my_bingo_card_tucker/?utm_source=share&utm_medium=web2x&context=3 TL;DR: Tucker announced he's in Moscow and is going to interview Putin. He says his intention is for America to see the other side of the story. One part of me thinks this could be good for making people in the West less biased. The other part thinks it will simply push people into new biases, or strengthen their old ones. In theory it sounds like something that should make people less biased, but that's not always how these things work.
  13. Success is relative. When you are already born into a rich family, success to you is not simply being able to maintain that level of wealth. Trump is a failure in many regards, for example intellectually and emotionally.
  14. @UnbornTao I like OpenAI's definition of AGI, which is an AI system that can replace humans at 'most economically valuable tasks'. A slight issue with this definition is that AGI will likely have a very skewed skill set, so we may end up with an AI that is superhuman at programming but still can't understand video as well as humans do. So AGI could also be 'general intelligence that matches or exceeds humans at many tasks while being shit at others'. 'What is AGI?' is almost as hard as 'What is intelligence?', so it's a rather challenging question. We could also go into its impact on society, if that's what you were asking.
  15. I agree completely. AI is a hype cycle, but it's not just a hype cycle.
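
Sketch for post 5, added for illustration: a minimal, hypothetical example of the 'drugs for LLMs' idea, adding Gaussian noise to each transformer block's output during generation so the perturbation compounds as it passes through later layers. This is not the implementation from the linked Reddit thread; the model choice, noise scale, and hook placement are assumptions made purely for the sake of a runnable toy.

```python
# Rough, hypothetical sketch: add Gaussian noise to every transformer block's
# hidden-state output during generation. Because each block feeds the next,
# the injected randomness compounds as it propagates through the layers.
# The model name and noise scale are arbitrary choices for this toy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any small causal LM works for the demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

NOISE_SCALE = 0.02  # higher values make the output progressively less coherent

def make_noise_hook(scale):
    def hook(module, inputs, output):
        # GPT-2 blocks return a tuple whose first element is the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        noisy = hidden + scale * hidden.std() * torch.randn_like(hidden)
        if isinstance(output, tuple):
            return (noisy,) + output[1:]
        return noisy
    return hook

# Hook every decoder block so the noise is re-applied at each layer.
handles = [block.register_forward_hook(make_noise_hook(NOISE_SCALE))
           for block in model.transformer.h]

prompt = "The meaning of life is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))

for h in handles:
    h.remove()  # restore normal behaviour
```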
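
Toy illustration for post 8: a made-up softmax-attention example showing how a low score on a single token ('not') means it contributes almost nothing to the attention-weighted mixture. The scores below are invented purely to show the mechanism and are not taken from any real model.

```python
# Made-up example (not from any real model) of soft attention under-weighting
# the token 'not'.
import math

tokens = ["do", "not", "press", "the", "button"]

# Pretend these are the raw attention scores a query assigns to each token.
# 'not' gets a low score, so after the softmax it contributes very little to
# the mixed representation the model actually "reads".
scores = [2.0, 0.1, 3.0, 1.0, 2.5]

exps = [math.exp(s) for s in scores]
total = sum(exps)
weights = [e / total for e in exps]

for tok, w in zip(tokens, weights):
    print(f"{tok:>7}: {w:.3f}")
# 'not' ends up with only about 2.5% of the attention mass, so the instruction
# is effectively processed as "do press the button".
```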