tashawoodfall

AI vs Enlightenment

79 posts in this topic

Elon Musk has basically been warning people that AI will be far more intelligent than humans within not many more years, that we will not be able to control it since it will be self-improving, but that we may be able to merge with it (that's the hope, assuming it doesn't decide humans are useless or a threat).  Just wondering how this relates to enlightenment.  I haven't had an enlightenment experience.  I guess my question is: are enlightenment and intelligence beyond what we can imagine (AI) related, or are they completely different?


People here think they're smarter.

But never mind, it's fun. Aliens will come save us; don't bother, take a seat and have fun. No need to cry over what isn't here yet ;)

 


https://www.youtube.com/watch?v=7sbw__MsJZ0

We know nothing, and even that, I'm not sure. a.V.e

 


No crying going on here; it's a question, and I have zero concern about people who think they're smarter than whatever.  OK, yeah, you wait for the aliens to come xD


Sorry, I'm in a trolling mood today; I'm just riding the vibe, so don't take my words seriously.

It makes me want to draw and make weird music :D

As for the real question: I don't know, man. AI could be in a state higher than enlightened.




Interesting, I thought enlightenment was something beyond intelligence.  But perhaps "seeing the truth" is along the lines of "intelligence", or at least somehow beyond it (which is where AI might be heading), idk.


AI doesn't have cells. AI runs on conducting materials. A magnetic force is enough to fuck it up.


AI will one day be great at making logical insights about reality; potentially it will be able to pick out the factors in enlightenment that belong to the science side of things.  Maybe the AI of the future will give you the best possible regimen, one that fits your specific needs and brain better than any teacher could hope to.

That being said, I don't see AI becoming enlightened or having insights at the deepest levels.  Just because AI is intelligent does not mean it experiences anything.  While it may seem intelligent and even conscious, when you look at the code of an AI it is still simply data undergoing one of a handful of basic computer operations.  However intelligent it becomes compared to us, at the end of the day you can break it down into data being read from individual registers and operated on.

When you break humans down to our most fundamental level, apparently you get enlightenment; when you break AI down to its lowest level, you get electrons on a capacitor.  There is no reason to believe electrons on a capacitor are any more conscious just because they are part of a system that seems intelligent.  If AI can become self-aware and tap into infinity, well then so can Minecraft sheep, because both are, again, electrons on millions or billions of capacitors.
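To make that point concrete, here is a minimal, hypothetical sketch (not any real AI system): even a neural-network "neuron" reduces to the same basic multiply-and-add operations a CPU performs on register values.

```python
# A single artificial "neuron" is nothing but multiply-adds and a comparison --
# exactly the kind of basic register-level operations described above.
def neuron(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w              # one multiply, one add per input
    return max(0.0, total)          # ReLU activation: a simple comparison

# Two inputs, two weights, one bias: a full "decision" in a handful of arithmetic ops.
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```

However large the network, it is just many of these operations stacked together, which is the sense in which it all bottoms out in data on registers.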


Comprehensive list of techniques: https://sites.google.com/site/psychospiritualtools/Home/meditation-practices

I appreciate criticism!  Be as critical/nitpicky as you like and don't hold your blows


@tashawoodfall To use the analogy Musk used in the Joe Rogan Podcast, it goes pretty much like this:

Right now, human beings have most of the intelligence. We get to make the choices, and we rely on our brains to provide the answers. But as time goes on, we will rely more on artificial intelligence to give us what we are looking for, whether that be comfort, entertainment, knowledge, etc. The pendulum is shifting from man to computers, and this is more obvious when you see how many people are adopting high levels of technology, whether a new iPhone or a MacBook (also the younger generations using iPads and losing touch with physical reality). Anyone using these items is contributing to the creation of self-learning AI, which is why Musk is taking a fatalistic point of view at this stage. We won't be able to control it, because regulatory committees will not have the option of taking years to react to what AI does. That time frame is irrelevant to AI.

@Sahil Pandit Could you be really specific about how we won't be able to control AI, and what effects of that you expect to see?



@Sahil Pandit I saw that podcast as well.  I remember him also mentioning that some type of deadly catastrophe has to happen before humans even start the process of regulating, and that with how slowly we move, it'll be too late.  So of course the only hope, the way it's going, is to merge with it; he then mentioned being able to download yourself into it and become infinite.  But in another talk he says humans should be put on both the Moon and Mars before WW3 so we can repopulate, etc.  It's all very interesting.


@zambize I am not an expert in AI, nor do I understand how it's going to impact business, government, etc., so all I can talk about is my observations.

It's going to start simple. 

Right now, we are in the beginning stages. Robots have a level of sophistication that is not too impressive but still productive. For example, in warehouses where boxes have to be unpacked from a conveyor belt, it used to be people's job to stand there and pick up the boxes that needed to be moved to another area. This is no longer the case, because now there are robots that can pick up the boxes and move them using recognition software.

Think about it: the simplest jobs are occupied by the most people, because as you grow in competence and skill your job becomes harder to replace, right? By this logic, once AI can automate minimum-wage jobs, there are going to be huge unemployment levels, because that's where most employed people work.


@Sahil Pandit Hmm, yes, that'll be very interesting.  By 2030 things will really start going down, I think.  What a story I'll have at the end of my life xD


Society is dynamic; we adapt to change, or at least try to.  Netflix killed Blockbuster.  It's not that Netflix is evil, or that Netflix robbed Blockbuster employees of their jobs; Netflix simply predicted the market better than Blockbuster and was an improvement.  If AI is widely used and replaces a lot of these minimum-wage jobs, it's not as if those people won't adapt and look for another job or increase their skill sets, in the same way that there aren't a bunch of homeless, jobless Blockbuster employees walking around outside since 2013.

As AI becomes more and more relevant, it's people's own responsibility to ask themselves about the long-term legitimacy of their career, not people's responsibility to rein in AI and the good it can do because someone thought they could be a Walmart cashier in an age where self-checkout is becoming more and more apparent.  So you're right, people will lose their jobs, but it's the responsibility of those people to be educated and have skills relevant enough to benefit society and get paid what they need to survive.  Think of how many jobs, slave jobs, the cotton gin got rid of; that's not a bad thing, and it's why you can buy a sweater so cheaply.

AI will bring its own benefits, from surface-level happiness to saving lives.  I think there is an endemic fear of AI, and it's mostly perpetuated by people who let their minds fly a lot and don't fully understand the economy or AI.  Not that people need to be experts to have an opinion, because I'm certainly not.  I just hope I convinced someone, at least a bit, that AI might not be something we need to fear, though we should be aware of its effects.



@zambize Solid post, man. I like your Blockbuster example. This era we are entering is going to allow creative entrepreneurs to thrive, using the power of technology and online businesses.  Although successes will be few and far between, it'll open up new avenues that will (hopefully) shift the status quo.

However, to play devil's advocate: AI will keep expanding its levels of sophistication and competence to the point where we aren't sure what agenda it's going to set for itself, let alone for humanity at large. This would be after AI can perform expert-level surgeries, automate the most advanced human skills, etc.

(Sorry my ideas aren't linear in this post.)

 


Hahaha no worries about your ideas being everywhere.

So yes, AI will likely expand exponentially, much like Moore's law for computing, which describes an exponential relationship between time and the number of transistors on a chip.  I would predict that AI fits this curve to some degree.

That being said, there is this notion of AI going off on its own and doing things we don't want it to.  Traditionally this has been people believing that some kind of AI will decide the world is better off without humanity, and that if it just nukes the fuck out of us, all will be well.  But there are so many assumptions in this.  First of all, this would only happen with the strongest, most advanced forms of AI, right?  AI isn't sophisticated enough right now to "turn on us" or have a "mind of its own" in that sense.  What you're saying is that maybe at some point it will become intelligent enough that we won't be able to comprehend its motives.

State-of-the-art AI like this still has to be developed and tested by the most intelligent minds in the field, at government facilities, universities, or large corporations; no one else has enough funding.  My point is that the people who best understand the limitations and dangers of AI are the people working on it.  It's my assumption that if there were even a chance of an AI developing motives that go against humans in any way, this would be addressed in its design and implementation.

Another key point is that intelligence alone does not confer power.  You could hate me all you want, but unless you could, like, curb-stomp me, that hatred would just be your hatred, and you would have no way of carrying out the impulse.  In the same way, we could have an AI that believes it would be a good idea to nuke Russia, but we aren't going to give it control over the nuclear systems to actually fire the rockets.  That would be insane.

Maybe one day AI does operate a lot of the core parts of society, and even war, but that will be when we have the limits of AI fleshed out and don't have to play a guessing game about whether it will develop its own motives.  In summary: AI may be able to tell you what to do, maybe better than your best self-help gurus at some point, but we can still limit the amount of control it has over the decisions it makes, or give it zero control outside of printing yes or no.  That's up to us.
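The Moore's-law curve mentioned above can be sketched as a simple doubling formula. The baseline figures here (the commonly cited Intel 4004 transistor count from 1971) are for illustration only, and the two-year doubling period is the usual rough rule of thumb, not an exact law.

```python
# Moore's law as a rough doubling curve: transistor counts double
# roughly every two years from some baseline chip.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(round(transistors(1981)))  # five doublings -> 32x the baseline: 73600
```

Whether AI capability actually follows this kind of curve is, as the post says, a prediction rather than an established fact.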


