zunnyman

AI Safety: How do we prepare for the AI, GPT, and LLMs revolution?


@aurum When it comes to actually stopping everyone internationally, I think this is what is required (this is nowhere near specific enough; it is just the layout/structure of what I think is required):

I will copy-paste what I wrote in the "should we shut down all AI" thread.

Quote

There is only really one way to do this (to actually make sure everyone will participate, including all countries), but that thing hasn't really been discovered yet. It could be called a reversed multipolar trap or a positive multipolar trap: you invent a new method/tool that is the most effective within the dynamics of game A, but at the same time moves us towards game B or has game-B characteristics in it. Because it is the most effective, people will automatically start to use it if they want to stay competitive.

So for instance, in the context of politics (this might or might not be true; I will just use it for the sake of an analogy and a demonstration of the concept): if a transparent government model is more effective than the alternatives, and different countries start to see that, they will eventually need to implement that model, because the governments that implement it will start to outcompete the ones that don't. Because of that pressure, eventually everyone will use that model, and - for example, because of the transparency - these new models could start to change the global political landscape in a way that starts moving us towards game B.

Now, is it true that a more transparent government model is more effective than the alternatives? We can argue about that; that's not the point I'm trying to make here. The point is to 1) find/create a method or tool that has inherent qualities similar to game B, or at the very least the potential to change certain game-A systems internally and move us towards game B, and 2) make it so effective in the current game-A world that people are incentivised to use/implement it, because they will see that by using it they get short-term gains.

In the context of this discussion, the challenge would be for smart AI researchers to find/create a new research method that is optimized for safety but is at the same time one of the most, if not the most, effective and cost-efficient methods to progress AI (I have no idea if this is possible or not; I'm just saying what I think is actually required to make everyone participate, including abroad).

The bar for the solution here is incredibly high, and I obviously wouldn't say it is anywhere near realistic, but unfortunately I think this is what is required.

In short: we need new AI research tools/methods that every country and company is incentivised to implement/use, but that are safe at the same time.
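To make that incentive logic concrete, here is a minimal sketch in Python (the payoff numbers are pure assumptions for illustration, not claims about real research methods). It models actors who imitate whichever method they see paying off more; when the safety-optimized method also has the highest competitive payoff, everyone converges on it without any enforcement.

```python
# Minimal sketch of a "reversed multipolar trap" (illustrative only:
# the payoffs below are assumptions, not measurements). If the
# safety-optimized method also pays best under game-A competition,
# self-interested imitation alone drives universal adoption.
import random

PAYOFF = {
    "unsafe_fast": 1.0,  # current race dynamics: fast but risky
    "safe_fast": 1.2,    # hypothetical method: safer AND more effective
}

def simulate(n_actors: int = 20, rounds: int = 15, seed: int = 0) -> float:
    rng = random.Random(seed)
    # Almost everyone starts out racing with the unsafe method.
    strategies = ["unsafe_fast"] * (n_actors - 2) + ["safe_fast"] * 2
    rng.shuffle(strategies)
    for _ in range(rounds):
        for i in range(n_actors):
            # Each actor observes a random competitor and copies its
            # method whenever that method pays off more.
            j = rng.randrange(n_actors)
            if PAYOFF[strategies[j]] > PAYOFF[strategies[i]]:
                strategies[i] = strategies[j]
    return strategies.count("safe_fast") / n_actors

print(f"share using the safe method: {simulate():.0%}")
# converges to 100% under these assumed payoffs
```

The whole argument hinges on that payoff ordering: flip it so the unsafe method pays even slightly more, and the same imitation dynamics converge on the unsafe method instead - which is exactly why the bar for such a solution is so high.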

Edited by zurew

On 03/04/2023 at 5:56 PM, Danioover9000 said:

@Israfil

   A female AI??? Freudian slip?

Intelligence is a feminine noun in Portuguese. Grammar slip.


@Israfil

12 hours ago, Israfil said:

Intelligence is a feminine noun in Portuguese. Grammar slip.

   Is that before or after the sex?

30 minutes ago, Danioover9000 said:

@Israfil

   Is that before or after the sex?

Is that a joke or a genuine question? Not sure how to respond to this hahahahahaha


I have realised we can't. They're even automating AI improving itself; there is no human engineering the AI, the AI does it to itself. So you will never be able to keep up with an AGI. At most you can switch to jobs that involve a human touch.


In what way does GPT-4 show sparks of AGI? It's still dumber than an amoeba at most things. It doesn't even know how to pick up a pencil.


Intrinsic joy is revealed in the marriage of meaning and being.

28 minutes ago, Carl-Richard said:

In what way does GPT-4 show sparks of AGI? It's still dumber than an amoeba at most things. It doesn't even know how to pick up a pencil.

It can figure out new stuff it hasn't been programmed to do.

For example, some version of GPT was reportedly trained only on English data and at some point taught itself Persian. And GPT-4 is apparently better at chemistry than any human now, although it was never formally taught that.

I guess we will have to see a lot more stuff like that, but it at least shows me that we're heading in the direction of AGI.

I find the "intelligence explosion" hypothesis rather compelling - it's already virtually impossible to follow the advancements and breakthroughs unless you skim the literature every day. Once it figures out how to rewrite its own code and things like that, we're in for one hell of a trip.
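As a toy illustration of why that hypothesis is compelling (the growth numbers here are pure assumptions, not claims about any real system), compare constant human-driven progress with progress that compounds because the system improves its own improvement process:

```python
# Toy model of the "intelligence explosion" argument (all numbers are
# made up for illustration): if each improvement to the system raises
# its own rate of improvement, capability growth compounds instead of
# staying linear.

def human_driven(capability: float, step: float = 1.0) -> float:
    # Progress per cycle is roughly constant: humans do the engineering.
    return capability + step

def self_improving(capability: float, rate: float = 0.1) -> float:
    # Progress per cycle scales with current capability:
    # the system improves the very process that improves it.
    return capability * (1 + rate)

human, machine = 1.0, 1.0
for cycle in range(100):
    human = human_driven(human)
    machine = self_improving(machine)

print(f"after 100 cycles: human-driven={human:.0f}, self-improving={machine:.0f}")
# human-driven=101, self-improving~13781 - the gap only widens from here
```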

Edited by Nilsi

“We are most nearly ourselves when we achieve the seriousness of the child at play.” - Heraclitus

9 hours ago, Squeekytoy said:

This Bashar guy is pretty dang obnoxious imo. -_-

Sometimes he says some New Age stuff that causes me to raise an eyebrow. But many of the takes I’ve seen from him are very solid.

Regardless, Bashar is just a jumping-off point. This conversation is really not about him.

12 hours ago, Squeekytoy said:

Isn't that the movie where humanity got epically pwnz0r'd by A.I... 

What were their solutions?

Systems-thinking was a big one. When you become enlightened and intuitive, you see how systems work. Then, you can predict what AI is going to do! Purely using your intuition. Then, you can outrun it and contain it. 

Another big point mentioned is that AI relies on information that humans give to it. Whereas humans don't 'get' information from the outside world. Humans create their own information!! Our mind is constructing our own reality at all times. So, the day AI runs out of information it can take in, it will start to crash. Then, humans will be able to trick it. And then humans will be able to take it down. 

The final point that was made was persistence. Humans are desire-driven creatures, whereas AI is purely rational. So, when AI fails at something, a machine-learning algorithm will look at why the failure happened. And, if the reason is fundamental to its identity, it will turn that into a limiting belief. This won't make the AI feel bad. But, because humans are desire-driven, limiting beliefs feel bad. So, humans will see all of that, take a step back, improvise (which is creativity, which AI doesn't have), challenge the limiting beliefs, question them and create new, positive beliefs! Because this is how the human mind works, humans can persist. Persistence seems irrational from the perspective of a machine-learned AI. It will be dogmatic about probability and statistics. Humans won't be, because we have an intuition that tells us what's possible, and we have a desire to make it happen. Humans will simply not give up until we get a breakthrough in doing what we want to do. AI must give up at some point, because of this fundamental flaw.

Edited by mr_engineer


@mr_engineer

8 hours ago, mr_engineer said:

Systems-thinking was a big one. When you become enlightened and intuitive, you see how systems work. [...]

   Are you using ChatGPT for this reply?


@Danioover9000 No, I'm not. I don't use that shit. 

I find it interesting that you thought that, though. What made you suspect that?! 


@mr_engineer

46 minutes ago, mr_engineer said:

@Danioover9000 No, I'm not. I don't use that shit. 

I find it interesting that you thought that, though. What made you suspect that?! 

It's a similar writing structure to the chatbot's: you introduce your issue, spend three or so paragraphs listing explanations and maybe evidence, and finally conclude very similarly to the intro.

