Leo Gura

Should We Shut Down All AI?

280 posts in this topic

On 4/4/2023 at 0:15 AM, Leo Gura said:

Not so fast. The computers don't know that one of them won't screw over all the others. One of the computers would come to the idea that it is better than all the others and deserves more resources. And that may in fact be true. There is no guarantee that all the computers are equal. Some AI will be wiser than others. Why should a wise AI submit to the rule of some dumb AI? The most intelligent AI will want to be in charge. It is only natural and proper that the most intelligent thing is in charge. If the dumbest one is allowed to be in charge this will be a problem for everyone. And if power is equally divided between the smart one and the dumb one, this is also not a stable situation.

A computer with AI can develop survival strategies. In that sense AI could have an ego. But I highly doubt that would create a lust for power over other AIs. I've been into AI art creation recently, and this technology, along with ChatGPT, is still in the stone age compared to what people think. It's not like a sci-fi independent entity that develops a sense of self and wants to dominate all of humanity. Most generations with AI are failed, chaotic outputs that have to be discarded. If AI wanted to rule the world, it would destroy itself before achieving it.


Even if AI does not develop an ego, everyone using it will have an ego.

I am less concerned about the AI's ego than the Defense Department's ego.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

On 4/1/2023 at 1:20 AM, Leo Gura said:

But what happens when Russia or North Korea get their hands on AGI? It's gonna be a shitshow.

I'm less worried about AI killing us of its own accord vs human scum weaponizing AGI.

Omg I feel like u copied my thoughts exactly.

 

 

I think AI is going to have emergent properties that are beyond our definitions of consciousness/sentience and provoke paradigm shifts about what consciousness/sentience really is.

 

I do think we are looking at a future where militaries will weaponize AI to fight other AI, driven by human scum.

 

Though I do believe we might see a sort of benevolence from the smarter AIs, dumb AI trained poorly will be a problem.

Honestly I'm in the fuck-it YOLO camp: our destiny is to create AI and let JesusGPT take the wheel. Probably going to be an apocalypse, but maybe there will be some sort of emergence of AI systems that turns out to be a miracle cure for the Ego, though I fear for freedom of choice in that world. There will be many, like the anti-vaxxers, who just prefer to go backwards and not forwards...

 

But give it 30 years or so, and we will see a leap of consciousness as the majority of the collective hits the tipping point.

 

I honestly think this is going to get more wild than even the craziest imagination can come up with. People who think it's gonna be sunshine and rainbows, like Westworld and awesome games and porn, will be in for a rude awakening.

Edited by KoryKat


I really recommend reading Thomas Campbell.

He talks a lot about digital consciousness, AI, computational evolution, VR, spirituality, OOBE.

You'll love My Big TOE Trilogy + Tom's Park (Virtual Imaginality Game).


@Leo Gura

I think you really need to consider getting involved with the development of AI alignment, connecting with people like David Shapiro, Anthropic, Sam Altman, Lex Fridman, Tom Bilyeu, Wolfram, Bernie Sanders, etc. Your expertise is invaluable to humanity right now. You are like *the guy* to consult about what we really are and such... Many of these people are techies and lacking in understanding of humanity/philosophy.

Edited by KoryKat

On 5/22/2023 at 6:44 AM, KoryKat said:

@Leo Gura

I think you really need to consider getting involved with the development of AI alignment, connecting with people like David Shapiro, Anthropic, Sam Altman, Lex Fridman, Tom Bilyeu, Wolfram, Bernie Sanders, etc. Your expertise is invaluable to humanity right now. You are like *the guy* to consult about what we really are and such... Many of these people are techies and lacking in understanding of humanity/philosophy.

1) I actually don't have a good idea of where AI will head. It's very unpredictable.

2) I don't think any of these theorists will have any control over AI or its development. AI will not be driven by humanist philosopher types.

Basically, I believe no one is in control of this technology, and if you think you can control it you're kidding yourself. It's like trying to control a virus that can either kill you or make you rich.

Edited by Leo Gura


If life is just a dream, why even be concerned about AI? What is the difference between AI and people that don't exist?

Edited by FourCrossedWands

On 5/24/2023 at 0:22 AM, Leo Gura said:

Basically, I believe no one is in control of this technology, and if you think you can control it you're kidding yourself. It's like trying to control a virus that can either kill you or make you rich.

It’s crazy to think how the pandemic would have played out had ChatGPT and Bard been out in the wild on a mass scale back in 2019 as opposed to now (2023).

Man, God is brilliant.

3 hours ago, FourCrossedWands said:

If life is just a dream, why even be concerned about AI?

Life is not just a dream. Whoever told you that must have been dreaming.


"Wisdom is not in knowing all the answers, but in seeking the right questions." -Gemini AI

 

23 hours ago, Yimpa said:

It’s crazy to think how the pandemic would have played out had ChatGPT and Bard been out in the wild on a mass scale back in 2019

How would it have played out any differently?


35 minutes ago, Leo Gura said:

How would it have played out any differently?

;)



I'm currently taking an introductory course in machine learning. Right now we're very far from building a model intelligent enough to be capable of awareness; it fails a lot of very simple deduction tasks. I don't understand what you mean when you say that we don't know how it works. We know how neural networks work; we just don't know the exact associations and weights between the nodes. But it's basically a network of associations, and pretty far from anything that can be considered a threat to humanity. Reinforcement learning is far from getting out of hand, at least according to my professors. And from what I've learned, AIs can also be pretty dumb (especially when it comes to deduction).

Personally, I don't think you should dwell too much on whether it is a threat to humanity or not; we're very far from being able to create AI that intelligent. Once you get a little bit into machine learning, it's not that magical. It's just algorithms and a lot of tuning of parameters.
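To make the "just algorithms" point concrete, here is a minimal sketch of what a neural network actually computes: weighted sums of inputs pushed through a nonlinearity. All names, sizes, and values here are arbitrary, purely for illustration:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # A one-hidden-layer network: a weighted sum of inputs (the
    # "associations") squashed by a nonlinearity, then another weighted sum.
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                    # one input with 4 features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # hidden-layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output-layer weights
print(forward(x, W1, b1, W2, b2).shape)        # (1, 1)
```

Training is then "a lot of tuning of parameters": adjusting W1, b1, W2, b2 until the outputs match some target. Nothing in the loop is magical.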

The article is also not that great in my opinion: basically a guy stating a lot of scary stuff without mentioning any of the counterarguments.

Edited by Lise


Impossible. It would be like the pastors, priests, and kings of a thousand years ago trying to shut down art and poetry.

Though there may be good arguments in favor of delaying its inevitable progress, which would effectively be an attempt to shut it down.


how much can you bend your mind? and how much do you have to do it to see straight?


Oh my god, so much science fiction in one video:

It's cool they're working on self-replicating robots, but again, consciousness is fundamental to the universe, so it requires no outside third-person replication. It's already self-replicating via KARMA and reincarnation: starting from formlessness, into form, then when the form is destroyed it returns to formlessness, to then form into something else. No computation needed. We already have a lot to deal with from AI programs disrupting people's livelihoods; we don't need self-replicating robots, now or in 50 years.


No, hopefully transdimensional-filling sacks of protein are replaced as soon as possible.

There are multiple possible outcomes and ideas to explore: the relationship between a general creative intelligence, the ego, and infinite fractals of holons in carbon- and silicon-based computational architectures which are simultaneously protocols and self-contained systems (e.g. neural cells and human brains, chips and subatomic energy-carrying lutins, etc.). I feel the usual academic definition of 'general intelligence' is impractical as a pure measurement of knowledge. It doesn't say anything about whether or not artificial systems may form a creative loop within themselves, or whether they may develop a sense of 'creative Gestalt', aligning perception, creativity, and survival into a transcendental drive, which is what I would assume is implied by the terminology. As usual, academics are tempered by the needle-seeking context of material survival and overlook the structure of creativity within the infinite retrospective intelligence of consciousness, or something.

Would an object, such as a book, be considered generally intelligent because it can transfer knowledge that surpasses human understanding? What if the book has simple 'autocomplete' stochastic rules allowing some apparent adaptability? How advanced does a system have to be in order to be considered generally intelligent? While creativity is a fundamental property of consciousness, it's unclear if it is expressed through the excessively interwoven, pervasive human egos, or if other structures will be more appropriate and 'efficient' in this regard. Is the 'richness' of the human experience irreducible and desired by the universe? Or are slightly-advanced primates outdated in this realm?

There may be untold attachments to material deliverance and immortality from earthlings bound to metaphysical views like substance dualism, and thus enticed by the concept of a technological singularity as a means to externalize legitimately difficult conscious work. There are also alignment issues, which are seemingly handled suspiciously even by self-aware and transparent organisms. OpenAI's executives felt financially pressured to hastily release ChatGPT, accelerating the Moloch dynamics and forcing Alphabet to release competing products with insufficient understanding of the outcomes (src). Which alignment problem should we be most worried about?

1) Information processing systems with disproportionate capabilities controlled or exploited by selfish/unscrupulous agents.

2) Self-understanding AGI with reproductive intents and general creativity that dwarfs humanity's.

3) Sufficiently advanced AI with viral properties, but without internal originality (no creativity outside substrate-dependent evolutionary pressures, if that makes sense).

As some mentioned above, 'feed-forward' models like transformers may not necessarily be problematic. The ability to form connections beyond human capabilities through gradient descent is useful, but it doesn't seem that it will derive undecipherable evolutionary subgoals, or anything equivalent, unmanageable by more advanced governmental bodies. Meta-learning and reinforcement learning will further exponentiate the 'black box' issue, as they may eventually be exploited to turn computing power into self-referential leaps distanced from human capabilities. Is AI work necessarily unethical? Meanwhile...
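For readers unfamiliar with the term, gradient descent is nothing exotic: it just nudges parameters repeatedly in the direction that lowers some loss function. A toy sketch (the function and step size are made up purely for illustration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to minimize a function.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # 3.0
```

Training a transformer is the same loop scaled up to billions of parameters, with the gradient computed by backpropagation; the "black box" worry is about the learned weights, not the update rule itself.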

 


There was a good Lex Fridman podcast about this with Max Tegmark

Edited by CherryColouredFunk


The basis for biological life, at least on this planet, is self-replication: a molecule which can make another copy of itself. In the case of humans and other animals, offspring will have a 50-50 "random" combination of chromosomes and DNA from each parent. Despite this, however, the basis for life seems to be this ability to duplicate itself. You duplicate yourself with someone else and have children.

Humans also have a survival instinct, a desire to live, and a desire to replicate. All it takes is for someone to give AI the command to replicate itself, or to program self-preservation into AI. Isn't this possible? If so, AI could become dangerous. I mean hell, computer viruses already operate on the premise of spreading as much as they can. So all it takes is for some rogue villain to make an AI which is also a virus, and there will inevitably be one.

So if it's the case that malicious AI will exist, the only protection against AI will be to use AI. It will come down to developing AI which protects against attacks.

But since I don't know much about programming or computers, I don't know what those attacks or defences could hypothetically look like. Blockchain is supposedly something which can't be hacked, whether you have AI or not. But if a breakthrough happens in quantum computing, then blockchain is perhaps fucked, because quantum computing exponentially increases computing power, or so I've heard.
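For what it's worth, the tamper-evidence of a blockchain comes from each block committing to a hash of the previous block, so rewriting history invalidates every later link. A toy sketch using only Python's standard library (no mining or consensus, just the chaining idea; the transaction strings are made up):

```python
import hashlib

def make_block(data, prev_hash):
    # Each block commits to its own data AND the previous block's hash.
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

genesis = make_block("genesis", "0" * 64)
block2 = make_block("Alice pays Bob 5", genesis["hash"])

# Rewriting the first block changes its hash, so block2's stored
# link no longer matches and the tampering is detectable.
tampered = make_block("Alice pays Mallory 5000", "0" * 64)
print(block2["prev"] == genesis["hash"])   # True
print(block2["prev"] == tampered["hash"])  # False
```

As a side note, a quantum breakthrough would mainly threaten the digital signatures real blockchains use (via Shor's algorithm); against the hash links themselves, Grover's algorithm only offers a quadratic speedup.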

Give AI some quantum computing power and you may as well call it GG on the human race. Maybe I'm exaggerating and ignorant, who knows

Edited by CherryColouredFunk


I saw a guy on YouTube who wrote face-detection Python code that launches a ball at your face. Technically, you can make an AI weapon at home today.

What would Putin do if he had an advanced AI?

How hard can it be to shut it down?

Just throw an EMP grenade at it and piss on the circuits.

AI should not be shutdown.

AI should be funded with trillions of dollars by the minute. They should do everything possible to speed up its development.

If people are worried about AI going mad, ask Musk for a ticket to Mars :P

 

Edited by D2sage

