erik8lrl

AGI is coming


1 hour ago, erik8lrl said:

Their goals for advancing tech and helping humanity are

I don't think so. It's hard to operate in the AI industry without a solid profit motive. Not that it matters.

Either way, I am happy to have more AI tools available to me. What concerns me is that they are putting guardrails on AI because of woke stuff, and now it doesn't work the way it used to.

Even now ChatGPT is barely usable. It gives trash responses and less relevant information, and it declines to answer without explanation. Another instance of why socialism/communism/wokeism never produces the intended outcomes.

On 17/02/2024 at 9:24 AM, erik8lrl said:

 

The naysayers on the dangers of AI are naive and blinded.

Yes, AI right NOW is largely benign. But that doesn't rule out bad actors: bad state actors or private bodies developing things without public knowledge. Every other area of computer science has advanced over the years, so why wouldn't AI?

Comparing the dangers to nukes, as some do, is equally cringe. Nuclear weapons are no doubt destructive, but they only work through human agency and action, and there are non-proliferation treaties to stop their spread. AI is far harder to contain. How can one stop coding or software development knowledge from spreading? A nuke cannot fire itself. An AI, even a benign one like ChatGPT, can function independently.

As a species, we need to put a halt to it. Even sign a treaty at the UN level limiting its development, and get North Korea and other pariah states to sign it. They signed the nuclear non-proliferation treaty, so there is hope.

45 minutes ago, bebotalk said:

How can one stop coding or software development knowledge from spreading?

Why would you even want to stop it? It's so ridiculous. We need to build zero-trust systems that can operate without being fooled by a freaking AI.

47 minutes ago, bebotalk said:

As a species, we need to put a halt to it

"As a species"

Do you even realize how silly this sounds?

Do you think China or Russia is simply going to halt development because someone fearmongers about it?

Do you think even the companies in the US are going to stop? Stop the fearmongering for a moment.

Whatever problems we face, we will solve them when they come, just like we always have.

48 minutes ago, bebotalk said:

The naysayers on the dangers of AI are naive and blinded.

Yes, AI right NOW is largely benign. But that doesn't rule out bad actors: bad state actors or private bodies developing things without public knowledge. Every other area of computer science has advanced over the years, so why wouldn't AI?

Comparing the dangers to nukes, as some do, is equally cringe. Nuclear weapons are no doubt destructive, but they only work through human agency and action, and there are non-proliferation treaties to stop their spread. AI is far harder to contain. How can one stop coding or software development knowledge from spreading? A nuke cannot fire itself. An AI, even a benign one like ChatGPT, can function independently.

As a species, we need to put a halt to it. Even sign a treaty at the UN level limiting its development, and get North Korea and other pariah states to sign it. They signed the nuclear non-proliferation treaty, so there is hope.

Yeah, regulations are needed for sure. 

29 minutes ago, Bobby_2021 said:

Why would you even want to stop it? It's so ridiculous. We need to build zero-trust systems that can operate without being fooled by a freaking AI.

"As a species"

Do you even realize how silly this sounds?

Do you think China or Russia is simply going to halt development because someone fearmongers about it?

Do you think even the companies in the US are going to stop? Stop the fearmongering for a moment.

Whatever problems we face, we will solve them when they come, just like we always have.

Why not? It's something that affects us all as humans, so yes, as a species. These countries have signed nuclear non-proliferation treaties. Being Western "enemies" doesn't mean we cannot find common solutions; we do on climate change, and China is leading some efforts in that regard. Since it's a global problem, ideally we should find a global solution for it. Solving problems only "when we meet them" is myopic; we are often confronted by problems we did not foresee, like global warming.

You're pushing the common narrative that any fear is overblown. We don't, and shouldn't, approach problems in life without being proactive.

 


A Terminator-esque scenario is unlikely, for now. But in decades to come we could get a Data-esque android. There has even been talk of sex robots that are essentially sapient. If those are ever produced, would they have rights? They would have consciousness, like we do. We can't claim we deserve rights merely by being human.

Even computer scientists in the 1960s could scarcely imagine the computers of today. With such exponential growth, who knows what is possible?

Putin even spoke about the dangers of AI in his interview. He's not stupid; he knows the threat to his country, at the very least.

This deepfake porn shit is disturbing too. The Trump arrest pics were funny as hell, but anybody can create that about anybody else, and to what ends? Voices can be faked too. So yes, as a species, we need to get a hold on this. It affects us all.

And enemies can't form common bonds, ever? OK. Then explain why the USA and USSR in the 1960s (during the Cold War, no less) signed nuclear non-proliferation treaties that still stand. It's not as black and white and limited as you make out. Assuming that AI will always be limited to ChatGPT-level technology, or can never be a threat, is exceedingly myopic and dangerous thinking.

Edited by bebotalk

3 hours ago, bebotalk said:

Then explain why the USA and USSR in the 1960s (during the Cold War, no less) signed nuclear non-proliferation treaties that still stand.

Exactly. The treaty was signed to prevent other nations from developing nuclear weapons. And what happened? India, Pakistan, North Korea, and even Israel developed nuclear weapons anyway. Even the US and Russia withdrew from the treaty. Which is why all such treaties are utter trash.

The exact same thing will happen with AI, now that corporations are playing the game. They would give absolutely zero flying fucks about any "treaty". If you try to regulate them too much, they will move to a country that doesn't regulate them and train their AI on all the data they can get their hands on.

I oppose any regulation that increases the barrier to entry for AI. Some basic regulations are necessary.

I do not doubt the sincerity or goodwill of your arguments, just that they will not produce the intended effects you are looking for.

Edited by Bobby_2021


None of the naysayers here are disputing that AI is going to continue to impact society in a profound way. (I'd contend that anyone who disputes this point is living in denial of Reality.)

What's being disputed is that Artificial General Intelligence (usually understood to mean human-level intelligence) is just around the corner.

Edited by DocWatts

I'm writing a philosophy book! Check it out at : https://7provtruths.org/

7 hours ago, Bobby_2021 said:

Exactly. The treaty was signed to prevent other nations from developing nuclear weapons. And what happened? India, Pakistan, North Korea, and even Israel developed nuclear weapons anyway. Even the US and Russia withdrew from the treaty. Which is why all such treaties are utter trash.

The exact same thing will happen with AI, now that corporations are playing the game. They would give absolutely zero flying fucks about any "treaty". If you try to regulate them too much, they will move to a country that doesn't regulate them and train their AI on all the data they can get their hands on.

I oppose any regulation that increases the barrier to entry for AI. Some basic regulations are necessary.

I do not doubt the sincerity or goodwill of your arguments, just that they will not produce the intended effects you are looking for.

India, for one, never signed the NPT. Most countries that have signed it have adhered to it. Maybe see reality for what it is, not through your US conservative lens, since American conservatives often think things are true without seeing reality. Pakistan and North Korea never signed it anyhow.

So it has had its intended effect. You said that enemy nations cannot cooperate; this is therefore false. The world isn't as black and white as you make it out to be. Since when is any agreement perfectly executed? Even treaties between allies aren't always followed; look at NATO. Keep on seeing life through a formulaic lens.

Corporations are subject to state law. If there were an international law requiring governments to ensure that companies under their jurisdiction adhere to AI rules, so be it.

Edited by bebotalk

On 2/27/2024 at 0:48 AM, Bobby_2021 said:

I do not care about some shameless billionaire adding an extra few billion into his pocket.

Oh, it's gonna be way worse than that. These AI companies will suck up trillions of dollars by exploiting little folk who are powerless to stop them.

Where do you think that billionaire got his billions? He did it by exploiting others.

If you let them, these AI companies will enslave you. Of course it will all be done under the pretense of improving the world. Not much different than what happened with factory workers in the 1800s.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

46 minutes ago, Leo Gura said:

Oh, it's gonna be way worse than that. These AI companies will suck up trillions of dollars by exploiting little folk who are powerless to stop them.

Where do you think that billionaire got his billions? He did it by exploiting others.

If you let them, these AI companies will enslave you. Of course it will all be done under the pretense of improving the world. Not much different than what happened with factory workers in the 1800s.

Let’s open a short position on Nvidia together. We might make a few bucks to win the rat race. (Not financial advice.)

1 hour ago, bebotalk said:

Most countries that have signed it have adhered to it

Dude, what I am saying is that an AI treaty will not be signed by the countries powerful enough to develop their own AI. They will simply ignore the treaty, like India and Israel ignored the NPT.

The countries that did sign the treaty were too weak to develop their own nukes; it's not like they had the capacity to do it anyway.

And you do realize that even Russia violated the treaty and the US pulled out of it. That's how flimsy these agreements are: the moment it's inconvenient, you can pull out.

You will be shooting yourself in the foot by not developing your own AI.

For example, Ukraine didn't develop its own nuclear weapons in order to honour some shitty treaty, and now it is paying a hefty price for it. Nukes could easily have prevented this sort of Russian invasion. That's what's going to happen to countries that don't pursue AI: they will be overshadowed by the countries that do develop their own AI.

2 hours ago, bebotalk said:

So it has had its intended effect

The intended effect was to prevent more countries, beyond the existing nuclear powers, from developing nukes. It clearly failed at that.

2 hours ago, bebotalk said:

Pakistan and North Korea never signed it anyhow. 

North Korea signed it and later pulled out, because without nukes they would be under the control of the US or bigger powers.

2 hours ago, bebotalk said:

You said that enemy nations cannot co-operate.

They don't have to be enemies to not cooperate. They could simply follow their own interests, which is to develop their own AI.

The laws you make only apply within a certain jurisdiction; they would do exactly the same thing in a different jurisdiction. It's so easy to skirt these laws, which are meaningless anyway.

2 hours ago, Leo Gura said:

Oh, it's gonna be way worse than that. These AI companies will suck up trillions of dollars by exploiting little folk who are powerless to stop them.

If they suck up trillions of dollars, then someone has to give them trillions of dollars. Would you give up trillions of dollars for free? Only if you got more than a trillion dollars in value back. I will be glad if a trillion dollars of value is pumped into this economy; that would do wonders for the economy on a scale greater than you can imagine.

If he is raising a trillion from investors, I couldn't care less. Let him do it. In that case he is making himself a slave by being in debt to those investors. There will be plenty of investors, so that is not going to be a problem. There is a chance that none of this will pay off and the money will be locked up in research and development. That is good.

If he gets a trillion-dollar contract from the government, then you should be concerned, because that is the taxpayers' money. Government contracts are scary. In that case you should blame the government for empowering trillion-dollar corpos.

If the "little folks" get a share of the trillion dollars in value, that will empower the little folk. You can use AI without having a single penny. You should look forward to increasing competition in the market by having more players and easing regulations. That way no single corpo can raise prices inorganically.

Stop seeing everything through the eyes of Karl Marx. Even he would change his mind on capitalism based on what is going on at the moment.

2 hours ago, Leo Gura said:

Where do you think that billionaire got his billions? He did it by exploiting others.

He got billions because you keep sending him money. You keep using his products even when you could choose not to.

He got billions from solving real problems.

And you use the products of people who exploit their workers. If you want to mass-produce anything, you need to exploit labor. This is altruistic exploitation, and a necessary one. Do you want to exploit people and build cheap solar panels, nuclear plants, and electric cars to produce clean energy, or die from climate change?

The exploitation is not done out of a shameless attempt to get rich. He is solving genuine problems, and the market prefers exploitative solutions by demanding cheap cars, solar panels, and so on. He could pay his workers more if you were willing to pay more for a Tesla. So you are doing the exploitation by buying Apple and Tesla products.

If it is exploitative, you can choose not to get exploited. No one is forcing anyone to work for a billionaire.

But someone has to use capital to make food as cheaply as possible so that people can be fed.

2 hours ago, Leo Gura said:

If you let them, these AI companies will enslave you. Of course it will all be done under the pretense of improving the world. Not much different than what happened with factory workers in the 1800s.

The AI companies can never enslave you on their own. They do not have a monopoly on violence; the government is the one responsible for ensuring law and order. The most that AI companies can ever do is accumulate lots of paper money, and that paper money is valuable simply because the government says so.

If the government is easily corruptible, then you know where the problem is.

The real problem is when you are squeezed out of alternatives because the government raises the barrier to entry for smaller businesses with ever more regulations and random, stupid fees for doing business. That way you are only allowed to buy services from one megacorporation. All the problems of slavery come from the government not doing its job, handing out contracts worth billions to these companies, or being lobbied into killing small businesses by overregulating them.

Most problems would be solved instantly if small businesses were not pestered out of providing services that make use of AI. That is how you democratize AI and put power in the hands of the people.

3 hours ago, Heaven said:

Let’s open a short position on Nvidia together. We might make a few bucks to win the rat race. (Not financial advice.)

Lol, just buy whatever Nancy Pelosi is buying at this point, you can't lose lol
 

6 hours ago, Bobby_2021 said:

Dude, what I am saying is that an AI treaty will not be signed by the countries powerful enough to develop their own AI. They will simply ignore the treaty, like India and Israel ignored the NPT.

The countries that did sign the treaty were too weak to develop their own nukes; it's not like they had the capacity to do it anyway.

And you do realize that even Russia violated the treaty and the US pulled out of it. That's how flimsy these agreements are: the moment it's inconvenient, you can pull out.

You will be shooting yourself in the foot by not developing your own AI.

For example, Ukraine didn't develop its own nuclear weapons in order to honour some shitty treaty, and now it is paying a hefty price for it. Nukes could easily have prevented this sort of Russian invasion. That's what's going to happen to countries that don't pursue AI: they will be overshadowed by the countries that do develop their own AI.

The intended effect was to prevent more countries, beyond the existing nuclear powers, from developing nukes. It clearly failed at that.

North Korea signed it and later pulled out, because without nukes they would be under the control of the US or bigger powers.

They don't have to be enemies to not cooperate. They could simply follow their own interests, which is to develop their own AI.

The laws you make only apply within a certain jurisdiction; they would do exactly the same thing in a different jurisdiction. It's so easy to skirt these laws, which are meaningless anyway.

All five of the original nuclear powers have signed it and by and large stick to it. India never signed it, but so what? Which law has perfect application? It's better to have some regulation even if it isn't perfect; life isn't perfect. It was the original nuclear powers that developed the treaty. The USA is still a signatory, as is Russia. The consensus is that it has generally worked in stopping the spread of nuclear weapons.

My point is that you said enemies cannot cooperate. The USA and USSR definitely were not allies at the time, and countries generally don't share nuclear weapons. It's an example of how many countries can agree on topics that affect us all; I don't see why that's hard to accept. Nation-states, like individuals, will do their own thing. The fact that some people still drive over the speed limit doesn't make speeding laws invalid. You keep asserting that because one country acts contrary to the common interest, any and all attempts at regulation are pointless. Since when is life that clean-cut? Better some regulation than no regulation. I don't know why you think everything must have perfect application.

From what both Xi and Putin have said publicly, they see the potential dangers of AI, so maybe there is a chance for them to cooperate. There are global climate change treaties between "enemies" that most countries are committed to. Why do you pose as smart yet not get that there aren't always perfect solutions to things?

https://www.state.gov/nuclear-nonproliferation-treaty/. Your own country hasn't pulled out at all.

I don't see a fundamental difference between this and a prospective AI treaty; both nukes and AI are threats to us all. It's better to have some regulation than no regulation at all. Your view, imho, is myopic and dangerous, and based on fallacies and blunted thinking.

Edited by bebotalk


@Bobby_2021 You have a naive view of how capitalism extracts money out of people. It's not just because you buy stuff at the store.

For example, all these large tech companies have made billions by selling your private data, which they effectively stole from the population without consent. It's very hard to make billions without cheating someone in the process.

Edited by Leo Gura

You are God. You are Truth. You are Love. You are Infinity.

