Revolutionary Think

Yang vs. Sanders

132 posts in this topic

3 hours ago, Anderz said:

As for artificial intelligence always being mechanical, I believe we will have artificial general intelligence fairly soon. Yes, today's AI is still narrow, but I think general algorithms are fairly easy to develop from simple rules, where the AI learns by itself in rich environments and simulations, much as a human baby learns through experience.

We are the artificial intelligence. Algorithms can never be intelligent. We created the algorithm. 


"The greatest illusion of all is the illusion of separation." - Guru Pathik

Sent from my iEgo

1 hour ago, TheAvatarState said:

We are the artificial intelligence. Algorithms can never be intelligent. We created the algorithm. 

Ray Kurzweil has said that we always tend to raise the bar for what counts as intelligence. When Deep Blue beat the best human chess player, people said it was just a program, not intelligence. When IBM's Watson beat the best human Jeopardy! players, people said that didn't count as intelligence either. The same will probably be said of AlphaGo and AlphaStar.

It's tricky to define what intelligence is. (Leo has a video about intelligence, but I don't remember him giving a formal definition.) Nevertheless, I think emulated intelligence will be possible, and, as Ray Kurzweil has said, so will simulated emotions for robots that are as convincing as real emotions, along with conscious behavior where we can't tell whether the AI has real consciousness or not. The AI will appear to have consciousness, intelligence, and emotions, and may even have those things for real! There is no method today that can test for the presence of consciousness. Testing for intelligence seems like an easier task, such as using a Turing test, but people will likely dismiss even that. What is needed is a formal and valid definition of intelligence, or people will argue about it endlessly.

 


Andrew Yang has influenced Marianne Williamson, who is now talking about "a tsunami of automation coming to America". Donald Trump is probably doing the right thing by focusing on jobs, but within a few years it will turn out that outsourcing jobs from the U.S. was a smart thing to do. Why? Because look at China today, where they are starting to have robots making robots! Massive automation will be much more problematic for China than for the U.S. when it comes to technological unemployment.

6 hours ago, Anderz said:

Massive automation will be much more problematic for China than for the U.S. when it comes to technological unemployment.

That's short-term thinking. In the longer term, every country on earth will be impacted by automation.

It will leave impact craters.

Edited by CreamCat


@CreamCat Yes, I was thinking about the short-term effect of automation. I was surprised by how advanced China has already become in terms of technology.

And the long run means only decades, because of the exponential progress of technology! For example, it didn't take 1,000 years to go from the early computer games Pong and Space Invaders to today's sophisticated 3D games; it only took decades. It took 50 years for the telephone to become mainstream, but only a few years for smartphones. There is a tremendous acceleration of technological development going on.

So yes, very soon, historically speaking, all countries will have to deal with automation. What may happen, however, is that governments put a brake on AI development, such as by refusing to make self-driving cars legal or by delaying the legal process. Also, many seemingly simple jobs humans do are actually very difficult to automate (a plumber or a psychotherapist, for example). The big wave of automation could be a decade or two in the future rather than imminent, politically speaking. We will see. China surprised the West with its recent progress, and the same can happen with automation.


YangGangers... like a pack of hungry wolves who found a bone.

sigh....

 


You are God. You are Truth. You are Love. You are Infinity.


Where we are now resolves into trying to heal the now by applying solutions meant for the future. Transition and transformation will not happen through denial, by applying those solutions without solving the issues and problems of the past. What happens in the future can only be made by solving the issues of the now.

We tend to search for simple solutions, and they are often the best in terms of how easily they can be applied. But just because something is a simple solution doesn't mean it works for every issue. The difference between a paperclip and a safety pin doesn't look big at first glance; each could even do the other's job for a short time, although neither is the right solution for the specific problem the other solves. Which one we apply really depends on the actual problem we are looking at.

In terms of progress, pumping in money alone has never solved any real issues. They have been talking about automation for decades, so that's probably where things will proceed, but not everything can be solved through automation, even on production lines. One aspect of automation is safety, and until that is addressed there is reason to put it on hold, even with a paperclip. But what I'm aiming at with this comparison is not how the issue of massive automation will be solved in the future, but rather how we arrive at a point where these automations are even used, and how they will be fueled. Those questions are much more relevant for the now.

One of my personal fears about UBI in the U.S. is that if you guys mess this up because it's applied prematurely, it will take decades before it can be used again elsewhere.

Using a safety pin would mean going with Sanders: testing basic income in two, three, or even more regions/cities, wherever it makes sense, and working towards a major shift in awareness first. The other option would be to build a wall with that money.

Edited by remember

17 hours ago, Anderz said:

Testing for intelligence seems like an easier task, such as using a Turing test, but people will likely dismiss even that. What is needed is a formal and valid definition of intelligence or people will argue about it endlessly.

That's because scientists don't know what intelligence is. And they won't know until they evolve out of the materialist paradigm. Intelligence does not come from the brain. Intelligence does not come from the computer, nor can it. It's not a second-order phenomenon arising from computations. Intelligence is that which envisions, plans, thinks in the abstract, divides and synthesizes information, takes different perspectives into account, etc.

The more you contemplate what intelligence actually is, the more you realize a computer can never have it. Think of a neural network that learns to play chess. For it to actually be intelligent, there would have to be another system that set up the neural network and defined the rules for it to solve. You might say that even today AI can write basic code. That's true, but the problem goes much deeper. It would have to write code to solve a problem within itself in order to solve another problem, and to do that, there would have to be yet another system in place to diagnose THAT problem and write that code. Algorithms can never do this. They always need an intelligent outside system to set up the parameters. Intelligence is scary when you start to grasp what it actually means. No 3-D object within this universe can achieve actual intelligence; that's my claim. I'm no expert, and I'm open to other perspectives, but we're talking about more than a quantum leap here. Computation alone cannot achieve it.




@TheAvatarState It's true that, especially in the hard sciences, the idea of intelligence is very narrow. Or as Eckhart Tolle said: "The only thing IQ tests show is your ability to solve little puzzles. Intelligence is so much vaster than that."

And Leo gave a great explanation of the deeper intelligence of reality in the video I posted. Nevertheless, I still believe AI can learn general intelligence by interacting with complex environments, similar to how a human baby learns through experience.

Edit: Artificial intelligence can be seen as a third-order structure. The first-order structure is the foundation of reality, which Leo is absolutely correct about: it is intelligent. Enormous intelligence! The physical manifestation of our universe is the second-order structure. AI is a third-order structure, since it is a product of us humans, and when AI starts developing its own products and services, that's a fourth-order structure! All of this is driven by the first-order intelligence producing higher and higher orders of structure.

Edited by Anderz

7 hours ago, Leo Gura said:

YangGangers... like a pack of hungry wolves who found a bone.

sigh....

Berners... like a pack of hungry wolves who found a bone.

sigh....

8 minutes ago, Anderz said:

I still believe AI can learn general intelligence by interacting in complex environments, similar to how a human baby learns through experience.

Question this belief. I agree that a robot can learn to walk, let's say, by interacting with complex environments. I believe this has already been demonstrated. 

This is not intelligence. "Learn to walk" was its program. An amazingly complex system had to be intelligently set up so that the neural network could learn how to walk. The AI will not decide to learn anything else. Where will it walk to, and why? Another system, another program.
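To make this concrete, here is a toy sketch (a hypothetical illustration, not from any actual robotics system): a tabular Q-learning agent on a five-state corridor ends up "learning" to walk right, but only because the reward function we hard-coded rewards reaching the rightmost state. The goal is supplied from outside the learner.

```python
import random

# Toy tabular Q-learning on a 5-state corridor (hypothetical example).
# The agent "learns" to walk right only because WE defined the reward:
# +1 for reaching state 4, nothing otherwise. The goal is baked in.

N_STATES = 5          # states 0..4; start at 0, reward at state 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # the hard-coded goal
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy in every non-terminal state is "move right" --
# not because the agent chose that goal, but because the reward did.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
```

Change one line, the reward, and the same learning loop "learns" the opposite behavior; the learner itself never questions or replaces its objective.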

Let's consider your example of the baby. A baby is born with certain programs, like how to eat. A ton happens automatically: functioning organs, breathing, the beating heart, etc. These are programs, not intelligence, and their control centers do correspond to parts of the brain in the physical body. But does a baby learn intelligence? A baby can recognize pictures and video. A baby can recognize itself in the mirror and make that connection. A baby can hear a fart and find it funny. A baby not only learns how to walk but forms a reason for walking. A baby can hear speech patterns and mimic them, etc. That's the tip of the iceberg, and all organically. It's fucking mystical. Intelligence isn't learned.

I'd highly recommend looking up "fungal intelligence" for more insight into the mystical nature of intelligence. Researchers in Japan studied the way slime mold grows.

https://www.google.com/amp/s/www.wired.com/2010/01/slime-mold-grows-network-just-like-tokyo-rail-system/amp

How did this happen? Point to the neural network. Point to the intelligence. 

 




@TheAvatarState Looking at the big picture, even artificial intelligence is a product of the universe, not something separate from it. The universal intelligence will, as I see it, "pull" AI into higher levels of intelligence, just as it pulls the fungal network described in the article into its structure. From that perspective, artificial general intelligence is inevitable, I think.

The key issue is the time frame for when automation really starts to replace jobs on a massive scale. True artificial general intelligence may take a long time to develop, even with the accelerating progress of technology. Or it may happen very soon, within just a few years, because there can be leaps of progress, such as narrow machine learning suddenly becoming more general with some new approach.

11 hours ago, Leo Gura said:

YangGangers... like a pack of hungry wolves who found a bone.

sigh....

 

So we sniff it and walk off because it didn't satisfy the hunger? Funny, that's kind of my experience with Bernie ;)




@Leo Gura With all due respect, I see more childish and angry behavior when Bernie supporters talk about Yang supporters. On the Yang side, they've actually acknowledged the good Bernie has done and don't really have a beef with his supporters.


@Revolutionary Think

15 hours ago, Leo Gura said:

YangGangers... like a pack of hungry wolves who found a bone.

sigh....

 

Agreed, Bernie supporters as a whole show a much more "radical" attitude. Besides, at least half of Yang's supporters are more or less in the middle between left and right and therefore feel no need to take up pitchforks. Even on this forum, I haven't seen a single Yang supporter fail to explain their position calmly and logically.

Edited by Progress


@Serotoninluv Yes, my definitions are relative, and yes, a Honduran's definitions of "just", "success", and "ethical" may even be radically different. Hondurans, however, live in a third-world country; they haven't even fully entered stage Orange. My definitions, furthermore, are not just conjured up for my own sake but carefully selected to provide maximum efficiency, leeway, understanding, and even compassion for the situation of American politics now and over the next 50 years. My definitions are, in short, relative to this era and this country. Even the vaguest stage Orange definitions of "just", "success", and "ethical" would still make sense in my writing.

Stage Orange definitions:

Just: legal; no slavery or corruption

Success: money, power

Ethical: not breaking laws or causing extreme pain, at least directly

Value: providing something someone wants, largely measured financially

------

You are right, there are more dynamics before and after jealousy occurs within the lower classes: poverty, desire, corruption, lack of opportunity, income inequity.

I am a supporter of some forms of socialism, but I make a careful distinction between socialism based on jealousy and socialism based on compassion.

 

1 hour ago, Progress said:

@Serotoninluv You are right, there are more dynamics before and after jealousy occurs within the lower classes: poverty, desire, corruption, lack of opportunity, income inequity.

I am a supporter of some forms of socialism, but I make a careful distinction between socialism based on jealousy and socialism based on compassion.

I'm pointing to something different. I think you have created a well-thought-out construct. The other dynamics I'm pointing to aren't so much an intellectual thing. If you were a different person with vastly different life experience, you would see things differently. You seem to be categorizing SD stages in a theoretical construct. There is nothing wrong with that; it has a lot of value. Yet in my experience, there is more. There is a non-theoretical understanding via direct experience. You seem to be looking at others from the outside, which is important for a meta-view. Yet another component of understanding is going inside and becoming that other person. I think you are seeing a hierarchy of perspectives along a vertical axis, which has a lot of value. Yet there are also perspectives along a horizontal axis, without a hierarchy. Imo, integrating both axes leads to the most holistic perspective. For example, you have twice used the term "jealousy", which is an outside view and portrayal. Seeing through a lens that interprets some as "jealous" and others as "compassionate" will miss underlying human dynamics. It divides groups into a vertical hierarchy and misses the horizontal axis. The families I lived with in poor third-world countries were way beyond "jealous", and they had aspects of Green you seem to be missing. Within days of living in such villages, compassion transforms into empathy. There is a "getting it" at a post-intellectual, human level. They had an understanding of certain forms of human dynamics, human connection, community, and empathy. They had a lot of viewpoints that I would consider as valid as yours.

A construct using "jealousy" and "compassion" regarding the poor and non-poor is a privileged viewpoint trying to control the narrative. Your perspective isn't simply relative to the current U.S. situation; it is also relative to you, based on your life history and the lens you are wearing. It is not objective. I'm not saying there is anything wrong with your perspective, and you probably know a lot more about financial theory than I do. I'm saying there is more going on. . . When I wrote of the direct human experience of living in a poor Honduran village, you responded, "Hondurans however live in a third world country, they haven't even fully entered stage orange". To me, this sounds like an outsider making judgments without direct experience. My guess is that you've never actually lived in a third-world community. That type of direct experience expands minds beyond any particular intellectual theory. There are lots of realizations and awakenings to be had in this area.

It's just a different mode of being and seeing. I've lived in both advantaged and disadvantaged environments. When I see an advantaged perspective that doesn't incorporate a disadvantaged perspective, it can be upsetting to me, because the advantaged get to write the narrative and the disadvantaged get marginalized; they don't get a seat at the table. The advantaged have power and influence. They are the ones who get to define terms like "value", "success", and "just". That is itself an injustice, which your definition did not include. . . When I see this power dynamic, sometimes I want to jump in and inject some disadvantaged perspective. . . We don't seem to be on the same frequency here, which is fine. All I have is a perspective as well.

7 hours ago, Progress said:

Agreed, Bernie supporters as a whole show a much more "radical" attitude. Besides, at least half of Yang's supporters are more or less in the middle between left and right and therefore feel no need to take up pitchforks. Even on this forum, I haven't seen a single Yang supporter fail to explain their position calmly and logically.

It's not like Bernie wanted his supporters to be angry and spiteful. Or did he?

6 hours ago, Serotoninluv said:

sometimes I want to jump in and inject some disadvantaged perspective.

Can I order 500mg of disadvantaged perspective?

