Nilsi

The Life Purpose Red Pill

51 posts in this topic

7 hours ago, Scholar said:

See, this perfectly encapsulates the way AI makes us mature, because it forces us to question the true purpose of our path. If the point of my existence is to create cute little art projects, I might easily be replaced. If the point of my existence is solely to be a research scientist, I might be replaced.

But, if the point of my existence is to discover and explore humanity, AI is a tool.

 

If the point of my existence is to discover the nature of reality, AI is a tool.

Yes, that was kind of my point.

I would add saving our species from self-extinction to that list.


“We are most nearly ourselves when we achieve the seriousness of the child at play.” - Heraclitus


@The Mystical Man @Scholar Being wise means being able to recognise a threat for what it is. That's the first step. After that we can talk about the positive views and aspects and how good things can get, but first we have to drop the naive notion that everything will be okay on its own.


This AI hype is becoming like the space-exploration hype. All of it is currently nonsense and very far from actualization. No need to worry about such trivia.


Foolish until proven other-wise ;)

2 hours ago, zurew said:

@The Mystical Man @Scholar Being wise means being able to recognise a threat for what it is. That's the first step. After that we can talk about the positive views and aspects and how good things can get, but first we have to drop the naive notion that everything will be okay on its own.

But everyone already does this; we have been coming up with doomsday scenarios for the past 30 years. The point is, dwelling on the negatives won't get you anywhere; you should already be thinking about how to solve these issues with AI. And if those problems are so important to you, then get involved: work on some project that uses AI to create a better information landscape on the internet. There you have a life purpose, instead of moaning and fear-mongering.

 

Notice how absurdly easy it is to identify these threats and problems, yet you have not proposed a single solution. Concern without problem-solving is a waste of time. How many people do you think exist in society, let alone on this forum, who believe AI will be completely unproblematic and that we will face no challenges? We all grew up watching Terminator, my dude. Of course we have concerns, but like I said, you need to focus on the potential and the opportunity if you ever want to get something done.

And if you can't see how much of an opportunity this is, then like I said, your pessimism and fear have eroded your ability to see the world clearly.


Glory to Israel

21 minutes ago, Scholar said:

But everyone already does this; we have been coming up with doomsday scenarios for the past 30 years. The point is, dwelling on the negatives won't get you anywhere; you should already be thinking about how to solve these issues with AI. And if those problems are so important to you, then get involved: work on some project that uses AI to create a better information landscape on the internet. There you have a life purpose, instead of moaning and fear-mongering.

I don't think you grasp the depth of this problem. The technology in a vacuum is not the issue; growth obligations, perverse incentives, and multipolar traps are the issue, and unless we can ALL coordinate on this together, it will blow up in our faces.

https://www.slatestarcodexabridged.com/Meditations-On-Moloch 

This is a great article on the underlying systemic issues that drive this whole thing.

Also, in what world is everybody already appropriately concerned about these things? Most people don't give a shit about anything that doesn't impact their lives today or the day after. The point is not to be cynical and depressed, but to acknowledge these realities and act accordingly.

 

 


2 hours ago, Scholar said:

But everyone already does this; we have been coming up with doomsday scenarios for the past 30 years.

No, the vast majority of professionals don't give a single fuck about contemplating how to prevent a fuck-up scenario. Maybe some laymen who don't know much about AI or tech are sceptical and come up with a lot of doomsday scenarios, but the professionals who could actually have a direct impact don't care, and even the ones who do can't do much, because other professionals will push ahead mindlessly anyway.

Also, there is a huge difference between coming up with doomsday scenarios and taking action to prevent those outcomes from happening.

 

2 hours ago, Scholar said:

Notice how absurdly easy it is to identify these threats and problems, yet you have not proposed a single solution. Concern without problem-solving is a waste of time.

That's not my job here; my job was to point out that we shouldn't naively believe and assume that everything will be okay on its own. I have some solutions in mind, but I had to react to your naive positivity on this subject. Being overly sceptical and being naive about it are both bad and unhelpful. We can only solve the problems we can recognise. If we naively assume that everything is okay, then there is nothing to prepare for or to solve. Identifying problems as they are, and knowing what the potential problems might be, is what opens the gates to solving them and preventing them from escalating.

There are no easy answers here; it's a systemic problem. Some of these problems can't be solved without radical changes.

2 hours ago, Scholar said:

but like I said, you need to focus on the potential and the opportunity if you ever want to get something done.

Not just that: focus on both the potential problems and the potential opportunities. This is not a subject where you can revert your fuck-ups and mistakes; that's exactly why we have to be careful, calculated, and smart, and think about this issue in a systemic way.

The vast majority of professionals are pushing this subject naively, and they are so overly positive about it that when the 'potential danger' talk comes up, they don't care, they change the subject, they leave the conversation, or they get heated about it.

2 hours ago, Scholar said:

And if you can't see how much of an opportunity this is, then like I said, your pessimism and fear have eroded your ability to see the world clearly.

I wouldn't consider myself pessimistic about this subject; I would consider myself realistic and careful. I can clearly see the potential opportunities and how much AI will be capable of, for both good and bad. It's already much more advanced than a human in some ways; it's good, it's fantastic, etc., but it's also dangerous if it's used mindlessly.

 

2 hours ago, Scholar said:

And if those problems are so important to you, then get involved: work on some project that uses AI to create a better information landscape on the internet. There you have a life purpose, instead of moaning and fear-mongering.

Nice, so one shouldn't outline potential problems unless one can solve them right away? The problem with people like you is that you give people a false sense of safety and hope. I had to push back with the negative side to balance your overly naive positive side, so that people can see both sides of the coin and get a more whole view of this subject.

Right now what we need is to slow the fuck down and think. The last thing we need is to push this subject even further without thinking in depth about the dangers and the solutions.

28 minutes ago, zurew said:

Right now what we need is to slow the fuck down and think. The last thing we need is to push this subject even further without thinking in depth about the dangers and the solutions.

I don't think that option is on the table. As long as we don't fix our underlying inability to coordinate on a global scale, this will not slow down.

We're basically in a giant prisoner's dilemma, where nobody really wants to defect (i.e. build AI drone weapons, over-the-counter CRISPR gene-editing kits, AGI without safety measures, etc.), but since we can't all come together and agree to cooperate, we're stuck in this race to the bottom.
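The race-to-the-bottom dynamic can be sketched as a toy two-player game. The payoff numbers below are illustrative assumptions (nothing in this thread specifies them); they just need the standard prisoner's-dilemma ordering:

```python
# Toy prisoner's dilemma over AI safety (payoff values are illustrative).
# Each lab chooses to "cooperate" (develop cautiously) or "defect" (race ahead).
# PAYOFFS[(my_move, their_move)] = (my_payoff, their_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both cautious: shared benefit
    ("cooperate", "defect"):    (0, 5),  # the cautious lab falls behind
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # the race to the bottom
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defecting is the best response no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...even though mutual cooperation pays more than mutual defection.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

This is the trap being described: (defect, defect) is the unique equilibrium even though everyone prefers (cooperate, cooperate), which is why unilateral restraint doesn't stick without an enforcement mechanism.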

But we're digressing here. This post was not supposed to be about computer science or game theory, but about not pursuing a naive, sociopathic Life Purpose.

12 minutes ago, Nilsi said:

I don't think that option is on the table. As long as we don't fix our underlying inability to coordinate on a global scale, this will not slow down.

Yeah, I agree. In the current system it's not a realistic option. That message was mostly aimed at Scholar.

12 minutes ago, Nilsi said:

We're basically in a giant prisoner's dilemma, where nobody really wants to defect (i.e. build AI drones, AGI without safety measures, etc.), but since we can't all come together and agree to cooperate, we're stuck in this race to the bottom.

Yes, the question is how you can incentivise all members to work towards the same goals. The answer will be a very complex Game B answer that we haven't fully figured out yet. So the goal is to work towards that Game B system and help the Game B people in our own ways.

18 minutes ago, zurew said:

Yes, the question is how you can incentivise all members to work towards the same goals. The answer will be a very complex Game B answer that we haven't fully figured out yet. So the goal is to work towards that Game B system and help the Game B people in our own ways.

Well yeah, we first need to actually agree that this is not the time to play games, and then find a way to keep each other accountable and force transparency. But yes, these problems are never going away again, so we need to find some kind of permanent, non-dystopian solution.

On 7.8.2022 at 2:36 PM, zurew said:

Nice, so one shouldn't outline potential problems unless one can solve them right away? The problem with people like you is that you give people a false sense of safety and hope. I had to push back with the negative side to balance your overly naive positive side, so that people can see both sides of the coin and get a more whole view of this subject.

Right now what we need is to slow the fuck down and think. The last thing we need is to push this subject even further without thinking in depth about the dangers and the solutions.

This is so silly lol. I was the one pushing against the overly negative view of the OP, and now you are pushing against mine, because you have not recognized in my posts the same intention you have in yours, just in reverse.

You are also not really saying anything but "Let's be careful everyone, this is dangerous!". I never said you can't point out the problems, but you aren't providing any solutions, and the few solutions you do provide are more virtue signalling than anything.

 

Either way, nobody will slow down. That's just naive. AIs will be developed at a rapid pace, like a rat race. This cannot be prevented. And I also disagree with you that the current developers of many AIs are somehow completely ignorant of ethical concerns. They are not; I actually applaud them for the amount of consideration they give to ethical implications.

It might not always be motivated by the right reasons, but at least we are seeing these decisions being made.


46 minutes ago, Scholar said:

And I also disagree with you that the current developers of many AIs are somehow completely ignorant of ethical concerns. They are not; I actually applaud them for the amount of consideration they give to ethical implications.

Where do you see those considerations manifested in practice?

Btw, some of them might be aware, but that's not the point. The point is that they are not incentivised to care about those ethical concerns.

46 minutes ago, Scholar said:

You are also not really saying anything but "Let's be careful everyone, this is dangerous!". I never said you can't point out the problems, but you aren't providing any solutions, and the few solutions you do provide are more virtue signalling than anything.

Do you think that message was intended as a solution, or more as pushback against your overly positive narrative?

46 minutes ago, Scholar said:

This is so silly lol, I was the one pushing against the overly negative view of the OP, and you are now pushing against mine because you yourself have not recognized the same intention within my posts as you have in yours, just in reverse.

Okay, then it's all clear now. It seems there is a lot less we disagree on than I thought; let's focus on the remaining disagreements.

 

5 hours ago, zurew said:

Where do you see those considerations manifested in practice?

Btw, some of them might be aware, but that's not the point. The point is that they are not incentivised to care about those ethical concerns.

Dall-E 2 is a good example: they restricted people from generating human faces for some time, and they still restrict people from generating pornographic content.

Here you have an article on Adobe's take:

https://blog.adobe.com/en/publish/2022/03/24/putting-principles-into-practice-adobes-approach-to-ai-ethics

And I don't think Google will behave much differently.

 

I also read statements by the Dall-E 2 creators saying they are very concerned about misinformation that might be created with their AI, so that's another thing they seem to be conscious of.

 

5 hours ago, zurew said:

Do you think that message was intended as a solution, or more as pushback against your overly positive narrative?

I think it was a reactionary response to a position you perceived me to have, and maybe because you reject the more important point I am trying to make about the future of mankind, as well as how to integrate oneself and aid its progress rather than just complaining about everything. My main point was an attempt to provide value to the OP by explaining how evolution relates to this; recontextualizing AI as an opportunity to improve mankind will be essential if we are to face the challenges ahead of us. I don't see a lot of talk about that in general, and I think it's far more important than the concerns, which will reveal themselves to us one way or another, and which mankind will naturally adapt to and solve.

 

 

I compared AI to the asteroid that caused a mass extinction, so I don't know how you came away from that thinking I was trying to minimize the challenges we will face. I am trying to tell you to be a bird.

57 minutes ago, Scholar said:

Dall-E 2 is a good example: they restricted people from generating human faces for some time, and they still restrict people from generating pornographic content.

That's not a very convincing argument. They have a monopoly so far, so they have no incentive to add a feature that would likely cause some backlash. Once there is competition, someone will inevitably seize the opportunity to get ahead by allowing it, and soon everybody will join the party. That's at least how these things have always gone historically, which brings it back to my point about needing to fundamentally rethink the whole incentive landscape and prevent these arms races from starting in the first place.

12 minutes ago, Nilsi said:

That's not a very convincing argument. They have a monopoly so far, so they have no incentive to add a feature that would likely cause some backlash. Once there is competition, someone will inevitably seize the opportunity to get ahead by allowing it, and soon everybody will join the party. That's at least how these things have always gone historically, which brings it back to my point about needing to fundamentally rethink the whole incentive landscape and prevent these arms races from starting in the first place.

A convincing argument for what? I never said nobody will ever do these things; I'm just pushing back against this weird narrative that nobody is thinking about the ethical concerns and possible negative consequences. That is clearly not true.

 

And sure, if you think you can change the incentive landscape, go ahead and contribute to that, though I don't think it will require special pointing out, because it will happen anyway and the problems will become apparent. Remember, we had people warning us about the negative effects of social media for about a decade, and what exactly did that do? Nothing so far.

1 hour ago, Scholar said:

Dall-E 2 is a good example: they restricted people from generating human faces for some time, and they still restrict people from generating pornographic content.

I think this is kind of good, but I wouldn't necessarily call it an honest approach, because restrictions like these are already more or less baked into law.

Related to Dall-E 2, one real concern is the art and design job market: how it will revolutionise the market, how many people will lose their jobs, what kinds of new jobs could be created, how we can take care of the artists who lose their income, and what will happen to art and design schools and their teachers. This would be one way to think about this specific issue systemically and to think ahead before the shit kicks in.

Related to GPT-3, one big concern is misinformation. 1) In the future, how can we differentiate between human- and AI-generated information, articles, and scientific papers? 2) How will social media sites be able to differentiate between AI-operated and human-operated profiles and accounts (and how can we help them prepare before GPT-3 goes public)? 3) GPT-3 will make the writer's job a lot less valuable, and GPT-4 will probably destroy the writing job market entirely, so what alternative solutions can we provide for those people?

Related to deepfakes: how can we differentiate between faked and genuine images, videos, and audio files? And when talking on the phone, how can we tell whether we are talking to an AI or a real person?

Related to self-driving cars and trucks: what alternative jobs or solutions can we provide for the people who will lose theirs in the near future (truck, bus, train, and taxi drivers)?

Related to the entertainment market: how can we take care of the comedians and musicians who will probably lose their jobs in the next decade, because AI will be able to generate super funny memes, messages, and videos, and any kind of music, with far greater quality and efficiency than a human ever could?

In the future, when most jobs are done by AI, how can we wrestle with the meaning crisis, where most people lose their motivation, hope, and purpose, because there will be UBI and human labour will be worthless in market terms? So basically, what artificial pillar(s) can we create that provide the same or a higher level of meaning to people than jobs and religions combined?

 

I could go on and on, but the point is that we should think about how to create a system that incentivises us and companies to think ahead about these issues and to try to find solutions before the problems occur. I know some of these are further away than others, but some of these problems are so big and so complex that they demand a lot of brainpower and time, and they will inevitably emerge, so we had better start somewhere.

I also know that some of these problems can't be addressed by only one company or agent, because they are too big; some of them are collective and some are global issues.

So the relevant question is how we can create a system in which we help and incentivise these companies to think about ethical issues, and how we can build a trustworthy relationship with them, where companies working on AI can safely and willingly share what tech they are working on, so that governments can create systems aimed directly at the problems that will emerge from those AI services and products.

One other relevant question is where we can tangibly see companies or any government taking these concerns and issues seriously.

1 hour ago, Scholar said:

Now this one holds some weight. Thanks for this article; it's good to see something like this.

 

14 minutes ago, Scholar said:

A convincing argument for what? I never said nobody will ever do these things; I'm just pushing back against this weird narrative that nobody is thinking about the ethical concerns and possible negative consequences. That is clearly not true.

 

And sure, if you think you can change the incentive landscape, go ahead and contribute to that, though I don't think it will require special pointing out, because it will happen anyway and the problems will become apparent. Remember, we had people warning us about the negative effects of social media for about a decade, and what exactly did that do? Nothing so far.

I'm with you on this. The people in charge do seem to care, but they can't really escape the game theory. I don't have the answers either, and I don't want to be the kind of guy who just stirs up panic, so believe it or not, I'm actually trying to figure these things out and make something happen. It's just not easy, and it definitely can't be solved at the level of this particular issue. What's required here is basically a fundamental shift in worldview across the board; a world revolution, one might say.



@zurew

11 minutes ago, zurew said:

Related to Dall-E 2, one real concern is the art and design job market: how it will revolutionise the market, how many people will lose their jobs, what kinds of new jobs could be created, how we can take care of the artists who lose their income, and what will happen to art and design schools and their teachers. This would be one way to think about this specific issue systemically and to think ahead before the shit kicks in.

 

Nice post. Yes, it'll be interesting to see how AI drawing programs will affect jobs in the arts. Not just visual-art jobs like illustrators, comic artists, and animators, but also music-related jobs like sound engineering, lyricists, and music production, and even kinesthetic kinds of art like dancing, martial arts, and more. The next question is: if you have spent a few years training yourself, say, drawing to become an illustrator or comic storyboard artist, and an AI program arrives in that field, what do you do with the cognitive dissonance and stress of coming to terms with your field becoming even harder to enter?

3 minutes ago, Danioover9000 said:

@zurew

Nice post. Yes, it'll be interesting to see how AI drawing programs will affect jobs in the arts. Not just visual-art jobs like illustrators, comic artists, and animators, but also music-related jobs like sound engineering, lyricists, and music production, and even kinesthetic kinds of art like dancing, martial arts, and more. The next question is: if you have spent a few years training yourself, say, drawing to become an illustrator or comic storyboard artist, and an AI program arrives in that field, what do you do with the cognitive dissonance and stress of coming to terms with your field becoming even harder to enter?

Exactly. There will be a lot of pissed-off people looking for some purpose in their lives. Already 42% of the world population is under age 24, and with the rapid increase in population size there will soon be a lot of young, educated people who find themselves in a world heading toward disaster, and they certainly won't be happy about it. That's actually great news: the time is ripe for a revolution, and if we don't fuck this up, it could be exactly what we need.


30 minutes ago, Danioover9000 said:

what do you do with the cognitive dissonance and stress of coming to terms with your field becoming even harder to enter?

Yes, this is a very important and relevant question that we collectively need to think about. The time when AI overtakes most of the job markets you mentioned above is not far away: maybe a decade, or a little more than that. But we don't need to wait a decade to see the effects AI will create. The transition phase will be hard as well; we will probably see people migrating from certain job markets to others, and that will have its own effects on the global economy. The problem comes when we, governments, and companies don't think ahead, and this 'job migration' happens in a chaotic or random way.

@Nilsi is also super right about the people occupying the job markets you mentioned above: they should start planning and thinking ahead, because their ideal LP will be overtaken by AI in the near future, so why put thousands of hours into a field you won't be able to work in for much longer?

19 minutes ago, zurew said:

@Nilsi is also super right about the people occupying the job markets you mentioned above: they should start planning and thinking ahead, because their ideal LP will be overtaken by AI in the near future, so why put thousands of hours into a field you won't be able to work in for much longer?

It's the same shit that happened with the emergence of the internet all over again, when the old folks claimed "CDs/the newspaper will never become obsolete!", only this time the implications are potentially much more serious.


