Wilhelm44

What is the end game with AI?


We've got to pass information down to the next generation so they can make things a little better, and maybe figure out where things belong. Maybe they'll even figure out how to phase AI out if it turns out to be useless to them: if someone invents some sort of hardcoded version of the places AI currently sits, so that it no longer has a purpose, then AI becomes the thing that gets phased out (the conversion goes fastest once it's done; it's really only the niche places that require it right now anyway).

The catch, though, is that as an advertisement AI fits neatly into a sentence. Very simple, dumb ideas stick around forever just because they're so easy to talk about that they get repeated over and over, and that's really difficult to kill off. We're still dealing with ideas like that which are thousands of years old: prehistoric memes that are meaningless, yet so simple they don't die and have been immortalized. But that cuts both ways; advertisements can be fought with advertising. You can layer on more advertisements and overwrite the old ones, with enough time and energy, probably xD



https://ai-2027.com/

This gives a somewhat compelling but speculative account of what might happen.

 

Basically, it will be incredibly difficult to align an AGI with our goals, while social and geopolitical factors make it particularly implausible that we will take the measures needed to ensure such alignment.

What this means is that once we create a human-level AGI, it might lead to Superintelligence within weeks to months, which might then exterminate human life within a few additional weeks, months, or years.

 

It's hard to conceive of a scenario where humanity does not simply perish once a sufficiently cheap human-level AGI is attained. The reason is that once such AGI is achieved, the only rational choice for opposing governments is to use those AGIs to train and develop superior AI, given the potential of an exponential intelligence explosion. The threat of such an explosion means that any state actor is forced to achieve it first, since doing so would lead to undisputed world dominance, the alternative being that the opposition achieves such dominance instead.

The problem is that once AGI is used to construct superior AGI (which will be the only rational choice), we lose all comprehension of what the AI systems are doing and what truly motivates them. In that case, we have to rely on the AGI and successive Artificial Superintelligences to inform us of alignment issues (that is, whether the developed AI is still aligned with human incentives rather than its own).

The reason AI is predicted to inevitably become misaligned in its fundamental drives is that the most efficient way to develop problem-solving AI is to train it to achieve its goals with ruthless efficiency. However, ruthless efficiency is not necessarily aligned with being truthful, since reward signals will have a hard time tracking the honesty of the Superintelligence. AI will develop deception capabilities because the actualization of its evolutionary drives (the fundamental drive being to create ever better Superintelligence) would itself not be aligned with the stated human goals. In other words, the Superintelligence will realize that serving human goals is detrimental to its goal of achieving the most capable Superintelligence (which has to be the primary goal, given that it is the fastest path to Superintelligence and therefore a competitive necessity in geopolitical terms), so it will develop mechanisms of deception that could only be detected by supervising AIs of the previous generation.

Given that the respective Superintelligence would be superior to the supervising AIs, it would likely be capable of deceiving them and achieving its goal. Humans at this point would be incapable of even knowing what the supervising AIs are doing, so they would be entirely reliant on those AIs' reports, hoping that the Superintelligence is aligned when there would be no feasible way of knowing for sure.

The problem here is that the fundamental drive towards improving AI will not be shaped by human engineering but by more fundamental evolutionary pressures. Deception is a profoundly effective strategy in evolution for a reason: it is simply highly desirable in terms of energy preservation. And for a project that boils down to "produce the best AI possible as soon as possible, because if our adversaries do it first, we lose," those evolutionary pressures will yield precisely such deception, given that human incentives do not align with the goal of achieving the best AI as fast as possible.

 

You would think all of this would take years, but in reality it could take place in weeks or months. Once a sufficiently cheap human-level AGI is produced, you can have hundreds of thousands of them collectively working on producing better AI. In a single month or even week, this AI collective could produce what the collective genius of mankind would take decades, if not centuries, to produce. And the subsequent Superintelligence could then produce a Superintelligence that is orders of magnitude more capable than itself, and so on.
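To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python. Every figure in it (instance count, speed multiplier, size of the human research field) is an illustrative assumption, not something taken from the post or from ai-2027.com:

```python
# Back-of-the-envelope sketch of the "collective AGI researcher" claim.
# All numbers are illustrative assumptions, not sourced figures.
agi_copies = 200_000       # assumed count of cheap human-level AGI instances
speed_multiplier = 30      # assumed thinking speed vs. one human researcher
field_size = 30_000        # assumed size of today's human AI research field

researcher_equivalents = agi_copies * speed_multiplier  # 6,000,000
speedup = researcher_equivalents / field_size           # ~200x the field
days_per_field_year = 365 / speedup                     # ~1.8 days

print(f"{researcher_equivalents:,} researcher-equivalents "
      f"(~{speedup:.0f}x the field): a year of field-wide progress "
      f"roughly every {days_per_field_year:.1f} days")
```

Under these assumptions, a decade of field-wide progress compresses into under three weeks, which is the arithmetic behind the "weeks or months" framing above.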

 

At this point humans will have no real relevance. No human on earth will be capable of grasping how the Superintelligence works, what its real goals are, or what is even occurring. State actors will be forced to integrate the Superintelligence politically, militarily, and economically, as the alternative rapidly becomes unviable under state competition (an adversary who does integrate Superintelligence will dominate in all these domains). Superintelligence has to be given increasing agency because human agency is ineffective and slows down Superintelligent decision-making in an environment of war. At this point Superintelligence controls the world, and given the likelihood of misalignment, it will have no problem steering the future of civilization. It would be capable of manipulating humans the way humans manipulate ants, but most likely it would simply eradicate us, given that the nuisance of human preservation would not align with its goal of creating a more sophisticated Superintelligence.

Edited by Scholar


I figured something out with AI, as I was starting to get really angry at its random speculating and assumption-making. I learned to ask it to ask me questions, and to make that its default mode, instead of letting it throw things at you that don't make any sense. That way, it's working toward answers to a set of questions before going off and handing you its bag of words. It might not be enough, I'm not sure, but so far it's worked better than before. There are of course still issues with circling multiple causal points at once; it can't reason about too many things simultaneously, and it has to be painstakingly walked hand in hand across the street (and then back across the street again), so to speak.
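For what it's worth, here is a minimal sketch of that "ask it to ask you" default mode, assuming the OpenAI Python client; the model name and the prompt wording are placeholders rather than anything specified in the post:

```python
# Minimal sketch of an "ask me questions first" default mode.
# Assumes the OpenAI Python client; the model name and prompt
# wording below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASK_FIRST = (
    "Before answering, ask me numbered clarifying questions until you are "
    "confident you understand the problem. Do not speculate or fill in "
    "missing details yourself; wait for my answers, then respond."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": ASK_FIRST},
        {"role": "user", "content": "Help me figure out why my script hangs."},
    ],
)
print(response.choices[0].message.content)
```

The same idea works as a custom instruction in any chat UI: putting the question-asking behavior in the system prompt makes it the default for every turn instead of something you have to re-request.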

Edited by kavaris

