Everything posted by erik8lrl
-
Yes, I think it's more of an AGI that can generalize most human knowledge without a conscious agent. Such an AI could democratize intelligence/expertise, which would impact society greatly. Imagine anyone having a life coach AI on the same level of development as Leo lol. Or any other form of expertise.
-
Very cool! Yeah, it's not perfect; we'll see how it develops.
-
AI might have processed far more data than the average human on a specific topic, but data alone is not what makes the emergent properties appear. It is the number of parameters a model has that determines it. Parameters are like the number of connections/synapses in a brain. Current AI has roughly 500 times fewer connections/synapses than a human. In my opinion, judging by my experience with LLMs and other neural networks, it seems obvious that as the number of parameters increases, the models become more "intelligent" and the degree of generalization they're capable of increases. This opinion could be wrong, since we don't truly know how a neural network of this scale, or even our brain, works yet. I would define generalization ability as the ability to make connections and recognize patterns: the more connections and patterns you can recognize, the more general and intelligent you are. Which is what we are seeing happen with LLMs. I think consciousness could be a by-product of this emergent process, once the number of connections reaches a certain amount of complexity and interconnectedness. We don't know for sure, of course. But I think LLMs are the early starting point of qualia, which, if achieved, would truly bring general intelligence.
-
Can you tell me what question you asked? Just curious. Of course, the models are not even close to perfect; if they could generalize perfectly to everything, we would have AGI. But this space is developing fast, so try it again when GPT5 comes out. Even if it can't solve puzzles, or its semantic understanding is off and misses details, it can still be good and useful for many different applications. It all depends on how you use it.
-
To seek deeper understanding, I had this conversation with GPT4: https://chat.openai.com/share/567c8b03-a7ad-45c1-bd54-8a5f2b0db1c7
-
https://blog.google/technology/ai/google-gemini-ai/#performance This is a breakdown of Gemini. You can try it here: https://deepmind.google/technologies/gemini/#introduction. 1.0 is free, I think, and is close to GPT4 in ability; 1.5 is in early access and not publicly released yet.
-
Have you tried GPT4? With GPT3 this problem is obvious, but GPT4 is already much better. The AI is not just applying the right patterns. Neural networks are fundamentally non-linear: none of the responses are hardcoded; they are all the result of emergent processes. This is why these models can also synthesize novel responses if you ask them to. Of course, prompting matters if you want the best possible result, but GPT4 is already at a level of semantic understanding where, generally speaking, it will understand almost anything you give it. This will only improve as the tech develops, and I would expect some major breakthroughs in LLMs this year, as we have already seen with Google's Gemini and will soon see with GPT5. I don't think people understand how big a deal Gemini's 1 million token context window is. It basically means the model can remember and reason with a holistic, long-term understanding of a problem. For example, it should improve AI's ability to code dramatically, since the model can understand and remember an entire code base/system structure as a whole, and then write code that is in context and best for the system it's trying to build.
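To make the codebase point concrete, here is a minimal sketch of what long-context prompting over a whole repo could look like. The directory, file pattern, and function name are all hypothetical; it just assumes some chat endpoint that accepts on the order of a million tokens (e.g. a Gemini 1.5-class API):

```python
# Minimal sketch: pack an entire (small) codebase into one long-context prompt.
# Paths, names, and the downstream model are illustrative assumptions.
from pathlib import Path

def build_repo_prompt(repo_dir: str, question: str) -> str:
    parts = []
    for path in sorted(Path(repo_dir).rglob("*.py")):
        # Label each file so the model can reason about the overall structure.
        parts.append(f"# FILE: {path}\n{path.read_text()}")
    source_dump = "\n\n".join(parts)
    return (
        "You are reviewing the following codebase.\n\n"
        f"{source_dump}\n\n"
        f"Question: {question}"
    )

prompt = build_repo_prompt("my_project", "Where should a caching layer go?")
# `prompt` would then be sent to whichever long-context model you have access to.
```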
-
https://x.com/NandoDF/status/1759148460526219383?s=20 https://x.com/DrJimFan/status/1759151032570093703?s=20 A good argument from Google's AI research lead that this emergent property is the starting point of, or similar to, life.
-
https://x.com/elonmusk/status/1759144877567205570?s=20 Tesla's new video/world generation model.
-
-
A good technical breakdown of emergent properties, and of how much we don't know about neural networks.
-
https://x.com/DrKnowItAll16/status/1758963126131655036?s=20 https://x.com/elonmusk/status/1758970943840395647?s=20 This is a good analysis connecting Sora's research direction with Tesla's self-driving research direction: both are doing the same process of world simulation and generation. And his point about AGI not being an agent is exactly my point. Early AGI will develop a level of general intelligence without agency/self/consciousness, then it will be implemented in embodied AI/robotics, and then, in the far, far future, we might or might not get conscious agents.
-
erik8lrl replied to Husseinisdoingfine's topic in Society, Politics, Government, Environment, Current Events
Russian officials say that he collapsed while walking outdoors, but didn't give the exact cause of death. We'll never know for sure.
-
https://x.com/doganuraldesign/status/1758977968159035876?s=20 https://x.com/billpeeb/status/1758960998315135360?s=20 Some new features of Sora; it seems you can control the video output with prompts to an insane degree. Totally! If turning text/images into video is super easy and anyone can do it, it will affect everything from education, art, and entertainment to politics and business/marketing. It's gonna be a big change, with the possibility of both positive and negative outcomes for society.
-
Lol yeah, prompting is super important for drawing intelligence out of LLMs, at least for now. Because LLMs are not super good at general understanding yet, their intelligence is often obscured by semantic context for most general users. The context you set for an LLM can drastically change its apparent level of intelligence. For example, if you set a condition in the prompt for the LLM to answer from, it will narrow its intelligence and perspective. Telling it to adopt a persona is one example: "Pretend that you are an (insert perspective) from now on when answering my questions" as a prompt will narrow its range of data connections and make it more intelligent in certain subjects or domains. "Pretend that you are an Enlightened being from now on when answering my questions." "Pretend that you are the best neural surgeon in the world from now on when answering my questions." "Pretend that you are the best AI researcher from now on when answering my questions." Etc. There are also custom GPTs: GPTs trained on proprietary data from other users to achieve expertise in different fields.
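As a rough illustration, a persona prompt like the ones above maps directly onto the system message in the OpenAI Python SDK; the persona and question strings here are just placeholders:

```python
# Sketch of persona prompting with the OpenAI Python SDK (v1.x).
# The persona and the question are placeholder examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message sets the persona/condition described above.
        {"role": "system",
         "content": "Pretend that you are the best AI researcher from now on "
                    "when answering my questions."},
        # The user message is the actual question.
        {"role": "user",
         "content": "What limits generalization in current LLMs?"},
    ],
)
print(response.choices[0].message.content)
```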
-
I think it's important to point out that we don't really understand how intelligence emerges from LLMs yet. Yes, on the surface it's just predicting the next word with a massive neural network. But as we increase the size of these networks, the LLMs become more and more "intelligent" through emergent processes, and no one actually knows how this happens, due to the complex nature of these networks. I don't think we can even be 100% sure these models are not conscious. Yes, they don't behave with a sense of self, and you can program the system to limit and modify the AI's behavior. But this emergent property of large neural networks might be the very early basis for developing qualia. We are only at the beginning of this development, and it already exhibits near-human-level intelligence in some areas; we don't know where this will lead as we scale up.

For example, GPT3 has 175 billion parameters. These parameters can be thought of as the connections between artificial neurons rather than the neurons themselves, so they are more like synapses. The human brain is estimated to contain approximately 100 trillion synapses, so it is about 571 times the size of GPT3; who knows what LLMs would be like if they reached the same size. Of course, that won't happen for a long time. But given how fast this tech is accelerating, even if it never reaches qualia, the impact on humanity will be significant. Google's Gemini Ultra was announced last December and reportedly has 540 billion parameters. That means parameter counts grew roughly 3x in 3 years; we are only at the start of the exponential curve, and we don't even know how many parameters GPT5, coming this year, will have.

You can say the same about humans, or any living being: we also need data/input to develop any form of intelligence. We can't do anything without our senses interacting with the world. Even creativity is often the interaction between existing ideas and inputs. AI art today can absolutely generate creative and novel artworks, artworks that are not drawn from any single data point but are synthesized from a massive amount of different data to realize the wholeness of an idea. Of course, the AIs themselves don't have a self or will to create yet, but humans can already produce extremely creative art using AI. If an AI had a self, I don't see why it couldn't be creative and artistic. But yeah, developing qualia is a far and unknown road down the future.
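For what it's worth, the 571x figure is just the ratio of the two numbers quoted above; a quick sanity check:

```python
# Back-of-the-envelope check on the figures quoted above.
gpt3_params = 175e9        # GPT3: 175 billion parameters
human_synapses = 100e12    # rough estimate: 100 trillion synapses in a human brain

print(human_synapses / gpt3_params)  # -> ~571.4
```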
-
@Seth @Phil King Yeah, I saw this video randomly. He is clearly exaggerating by a lot. But the acceleration rate of AI is definitely alarming, which is the main point of this post.
-
I agree, we are nowhere near simulating the intelligence of real living beings, and that might not happen in 50 or even 100 years. What I mean by AGI here is more practical: an AI that can solve novel, general problems better than an average human would be my definition of AGI. It doesn't need the complexity an organic being has for survival. Heck, it doesn't even have to know how to drive a car. I think a self-driving agent is a very difficult general problem; the AI would have to reach the level of a conscious being to truly never make a mistake, since the range of problems/situations you can run into is near infinite due to the complexity of reality.
-
erik8lrl replied to Husseinisdoingfine's topic in Society, Politics, Government, Environment, Current Events
I see. I asked ChatGPT to do some research and give me a breakdown. Here is the result:

The discourse surrounding Alexei Navalny and his political trajectory reveals a complex character whose positions and tactics have evolved significantly over time. Initially, Navalny was identified as a staunch supporter of free market reforms in the 1990s, inspired by then-President Boris Yeltsin's policies. Over time, his firsthand experience in the business world during Russia's tumultuous transition to capitalism and his observation of the systemic corruption led him to focus on anti-corruption efforts, setting a cornerstone for his political activism.

Despite his early engagement with nationalist movements and participation in far-right rallies, Navalny's political stance has broadened over the years to encompass a wider anti-Kremlin sentiment, focusing more on corruption and less on nationalist rhetoric. This evolution is detailed in a thorough analysis by The Moscow Times, which highlights Navalny's shift from a "market fundamentalist" to a leader with a more nuanced approach, including advocating for liberal economic policies while maintaining a strong stance against corruption and inequality.

Navalny's commitment to exposing corruption within Russia's elite has garnered him both national and international attention, transforming him from an anti-corruption blogger to a significant political figure challenging President Vladimir Putin's rule. His efforts to mobilize support across Russia's vast geography, despite facing personal risks and legal challenges, underscore his role as a central figure in Russia's opposition movement. However, his past involvement with nationalist groups and statements has also attracted criticism and controversy, leading to debates about his true political ideology and end goals.

Al Jazeera provides additional context, suggesting that while Navalny has moved away from overt nationalism, he continues to advocate for some measures that could be seen as anti-migrant, indicating a complex relationship with his past nationalist tendencies. This nuanced perspective suggests that Navalny's political identity cannot be easily categorized, reflecting a blend of liberal, nationalist, and anti-corruption elements.

Furthermore, the broader implications of Navalny's activism and the West's response to Russia's internal politics are explored by Al Jazeera in another piece. This analysis raises questions about the potential consequences of Putin's fall from power, the fears of Russia's disintegration, and the West's role in shaping perceptions and outcomes in Russian politics. The discussion points to a divided Russian society, with Navalny's efforts seen as part of a larger struggle for political legitimacy and change within Russia, even as the country navigates complex internal and external challenges.

These sources collectively paint a picture of Navalny as a multifaceted political figure whose past actions, current efforts, and potential future impact on Russian politics are subjects of significant debate and interest both within Russia and internationally.
-
This is because it is not intelligent enough yet. It will improve over time, and soon. To be fair, yes, no one knows for sure. I'm simply posting this to bring awareness to how fast this development is accelerating. What will happen is yet to be seen, but given the sheer impact this could have on humanity, I would start preparing for it to become a reality and pay attention to this space. I think our definitions of AGI are different; from what I'm reading, you seem to think of AGI as a conscious intelligence. I don't know if that will happen. My definition of AGI is more like what we have now, but far better and more intelligent at solving problems or providing understanding across different domains. I think Sam Altman's definition of AGI is when AI can help solve new and novel physics problems.
-
https://scitechdaily.com/counteracting-addiction-how-alcohol-and-drugs-genetically-rewire-your-brain/ New research suggests that drinking alcohol rewires your brain to become more prone to addiction in general.
-
Self-driving agents are difficult to develop precisely because they require high levels of intelligence across multiple domains. Robotics and robotic agents will likely come after AGI is achieved digitally.
-
@zurew Language is the connector between different domains; it's how we interface with these models. Even an image/video model like Sora is fundamentally an LLM. Multimodal LLMs are the foundation of AGI: when the number of domains and the level of intelligence reach a certain point, general intelligence will develop.
-
erik8lrl replied to Husseinisdoingfine's topic in Society, Politics, Government, Environment, Current Events
Interesting. Can anyone from Russia actually verify what this guy is saying?
-
It won't stay on an exponential curve forever, but because we are still so early in this development, it will be a while before the curve slows down. I define intelligence here more as logic, reasoning, and problem-solving capability. Moral and philosophical problems are often relative and paradoxical; they call for wisdom rather than intelligence. For an AI to have wisdom, it would need consciousness, a perspective from which to project intelligence. That consciousness might emerge from AGI; we don't know. But at least for now, it seems that simply scaling up the quality and size of the data lets the models increase their intelligence regardless of domain. This is why AGI might happen sooner than people expect.

Honestly, I didn't expect AGI within 5-10 years, but the Sora release changed my perspective. I work in AI video production, and the leap in quality from Sora versus every other model on the market right now is unbelievable, like jumping from 10 to 100. It single-handedly solved almost all the problems with video generation models. I didn't expect this quality for another 2 years, but it is here now.

The thing with scaling intelligence this way is that there is no hard limit on how large you can scale. As long as you have the resources, you can scale as large as you want, which means the speed at which this tech develops is bounded not by technical limitations but by scale alone. I'm not an AI researcher, but it seems that if the scaling factor works for both language and image training, it will likely work for other domains and functions as well. An AGI is likely a combination of models trained on every domain possible, and that seems totally possible. The 7 trillion dollar ask makes a lot of sense from this perspective.