electroBeam

cool fucken research paper!


Part of my job involves inventing AI algorithms, and in the course of that work I came across this paper, which I thought I'd share.


A personal insight I have gained from reading this paper: human cognition and thought are not a product of neurons but a product of consciousness itself. Indeed, the brain is inside consciousness rather than consciousness being inside the brain. Given that, the psychological tools humans use are not products of (or derived from) the brain; they are products of, and derived from, consciousness.

 

Therefore, if we want to make AI smarter, we should build algorithms around the psychological tools human beings use, like the ones in this paper.

 

coolfuckenpaper.pdf

2 hours ago, electroBeam said:

part of my job involves inventing AI algorithms

Finally found someone into AI as well :) Here are some points I wanna share:

- The reason humanity still fails at creating true AI (one better than a human) is that most creators are programmers who have little to no knowledge of how the human brain works. The answer already lies in the way our brains work, so why do they ignore it?

- Knowing Spiral Dynamics + the transition from monkey to human might be of great use. The mistake they all make is trying to teach the AI lots of stuff too early. You've got to start by giving it a primitive, animal-like mind, then let it progress up by itself. Just put it at stage Beige and let it evolve. Yes, it will take a long time for it to evolve in the early stages, but exponential growth will do its job.

- There are many brilliant minds, but few have access to supercomputers in order to create true AI. That's why I bet the birth of true AI will come when quantum computers get good.

- Most scientists bet on the creation of true AI within 50 years; what do you think? The biggest threat to that AI will not be humans, but other AIs. Imagine an AI in the hands of radicals... The only possible way for humans to co-exist with AI is to live under a totalitarian regime. That way the "good AI" will monitor and make sure no other "bad AIs" emerge. Getting AI under your control ain't that hard.

 

Elon says the best option is to merge with AI, but that would be such a radical change that technically humans will change to the point where they become different beings. Like the monkeys before us: they kinda died out, and yet evolved into being us.

Here is a good article on AI:

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html


21 hours ago, 111111 said:

- The reason humanity still fails at creating true AI (one better than a human) is that most creators are programmers who have little to no knowledge of how the human brain works. The answer already lies in the way our brains work, so why do they ignore it?

There's also a point to raise, though: a computer is made of hardware that works completely differently from neurons, so the configuration of neurons may not be relevant to computers. Neurons are completely different from transistors. Furthermore, we need to acknowledge that human beings (biologists) were the ones to observe neurons in the brain. All humans have biases and distort the truth to a certain extent (and science, believe it or not, distorts the truth massively), and when biologists/neurologists were theorising about the human brain, they weren't doing it in the context of AI but in the context of neurological diseases and previous theories about biology. That perspective may not be useful to AI scientists.

But furthermore, what I personally believe is that it's impossible for AI to ever be true AI as long as it's limited to computers. Computers work on ALGORITHMS. Whatever a computer is doing, it's doing it via an algorithm. Of course the algorithms these days are highly complex, artificial neural networks especially, but whatever it is, it's an ALGORITHM. The fundamental philosophy, limits and ideas inherent in mathematical algorithms will be inherent in computer software. And if you study the fundamental mathematical philosophy of algorithms, you become aware of their limits, for example undecidability results like the halting problem. These limits literally stamp out true AI. We may make a very powerful machine in the future, but it will always be a machine/engine, nothing more, nothing less.

Computers are engines, not brains. And we need wildly different hardware or software platforms if we ever want to make a true brain.
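
To make the "limits of algorithms" point concrete, here is a minimal sketch in Python (my own illustration; the names halts and paradox are hypothetical) of the classic halting-problem argument: no algorithm can decide, for every program and input, whether that program halts.

# Sketch of Turing's halting-problem argument: suppose a perfect
# halts(program, argument) oracle existed. The point is that it cannot.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    No total, correct implementation of this function can exist."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    # otherwise: halt immediately

# Feeding paradox to itself is contradictory either way:
# if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
# if it is False, then paradox(paradox) halts. So halts() cannot exist,
# and that ceiling is baked into every algorithm-based system.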

21 hours ago, 111111 said:

- Knowing Spiral Dynamics + the transition from monkey to human might be of great use. The mistake they all make is trying to teach the AI lots of stuff too early. You've got to start by giving it a primitive, animal-like mind, then let it progress up by itself. Just put it at stage Beige and let it evolve. Yes, it will take a long time for it to evolve in the early stages, but exponential growth will do its job.

Actually, when I was a freshman at university, I built an AI algorithm to control a group/society in Minecraft. I tried to get the people in Minecraft to mimic the levels of Spiral Dynamics. This is really where I first discovered my insight about the limits of AI: the fundamental problem with trying to get the AI to spontaneously evolve was that the AI had no capacity for creative insight. This is because everything in AI is predetermined (it runs off a script/algorithm), and for an AI to truly evolve through those levels, running off a script simply wasn't good enough, as the toy sketch below illustrates. We need a revolutionary computer system which isn't told what to do (AT ALL, I.E. NOT PROGRAMMED) but is rather influenced. Its thought process needs to begin spontaneously and end spontaneously.
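
Here is what I mean by "running off a script", as a hypothetical toy in Python (nothing like my actual Minecraft code): even an agent that looks adaptive is a fixed function of its state, its inputs and its random seed, so replaying the same inputs reproduces its entire "life" exactly.

import random

# Toy rule-based agent: every "decision" is a fixed function of
# state + observation + seeded randomness, i.e. fully predetermined.
def run_agent(seed, observations):
    rng = random.Random(seed)
    state = {"fear": 0.0}
    actions = []
    for obs in observations:
        if obs == "threat":
            state["fear"] += 0.5      # scripted response to danger
        else:
            state["fear"] *= 0.9      # scripted decay when calm
        # even the "spontaneous" choice is just seeded pseudo-randomness
        action = "flee" if state["fear"] + rng.random() > 1.0 else "explore"
        actions.append(action)
    return actions

obs = ["calm", "threat", "threat", "calm"]
# identical seed + identical inputs => identical behaviour, every run
assert run_agent(42, obs) == run_agent(42, obs)
print(run_agent(42, obs))

No creative insight can appear in there that wasn't already implied by the rules and the seed.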

I see quantum computers making this opportunity possible.

 

21 hours ago, 111111 said:

- Most scientists bet on the creation of true AI within 50 years; what do you think? The biggest threat to that AI will not be humans, but other AIs. Imagine an AI in the hands of radicals... The only possible way for humans to co-exist with AI is to live under a totalitarian regime. That way the "good AI" will monitor and make sure no other "bad AIs" emerge. Getting AI under your control ain't that hard.

It's interesting that you talk about a totalitarian regime. Yes, AI in the context of computers would be algorithmic, so the system we live under would have to be of an algorithmic nature.

 

Honestly, while this is a dominant opinion among AI professionals, I see it as myopic. They are totally blind to the fact that computers are just machines/engines. They aren't some cool new magic spell discovered on an ancient archaic tablet of a race far beyond our universe. Computers are powerful, and AI systems are too, but only in the context of pumping out mathematical patterns, whether that's patterns in images (object detection, image classification, etc.) or full decision trees (like AI in video games).

 

There is a massive, thick wall around what AI can do, and that won't be broken until we leave this algorithmic perspective.

 

We also need to understand that human beings are not just pattern-pumping engines, or engines at all for that matter. Human beings are actually quite hard to predict, and seem to produce spontaneous insights which are simply not possible to produce on a deterministic, algorithmic system.



@electroBeam

When it comes to the "algorithm" problem, humans themselves work mostly by algorithms. There are codes programmed into us like "survive", "fear death" and so on. Regardless of that, in the past 2-4 years AI has skyrocketed because it started to have a mind of its own. Watch this please:

[embedded video]

Take notice of how the AlphaGo AI works: it didn't brute-force calculate possibilities like the ancient AIs do in chess, but created patterns of its own, LIKE HUMANS DO.
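
A crude way to see the contrast in code (a toy Python sketch, not AlphaGo's actual implementation; policy_net here is a stand-in for a real trained network): classic chess engines enumerate the game tree with something like minimax, whereas an AlphaGo-style system first asks a learned policy which moves are even worth searching.

# Old-school brute force: try every legal move to a fixed depth.
def minimax(state, depth, maximizing, moves, play, score):
    if depth == 0 or not moves(state):
        return score(state)
    values = [minimax(play(state, m), depth - 1, not maximizing,
                      moves, play, score) for m in moves(state)]
    return max(values) if maximizing else min(values)

# AlphaGo-style: a trained network assigns a prior to each move, and
# only the few moves it finds promising get searched more deeply.
def alphago_style_candidates(state, moves, policy_net, top_k=3):
    priors = policy_net(state)   # mapping: move -> probability
    ranked = sorted(moves(state), key=lambda m: -priors.get(m, 0.0))
    return ranked[:top_k]

# Dummy stand-ins, just to show the shapes involved:
dummy_moves = lambda s: ["a", "b", "c", "d"]
dummy_policy = lambda s: {"c": 0.6, "a": 0.3, "b": 0.05, "d": 0.05}
print(alphago_style_candidates("position", dummy_moves, dummy_policy))  # ['c', 'a', 'b']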

What I fear is that almost everyone sees the word "AI" slapped onto dumb stuff (which isn't really AI) and automatically thinks AI is nothing to worry about, once again completely ignoring exponential growth and being very short-sighted.

We already have supercomputers more powerful than human brains, plus software which can create patterns of its own, and you think there's nothing to worry about?!

 


It simply baffles me that @Leo Gura says it's gonna take "hundreds" of years for us to get to Turquoise. The thing is, it will either take way less time than that, or we won't be alive by then. STUDY EXPONENTIAL GROWTH.

[image: exponential growth chart]
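
If the chart is hard to picture, here is a tiny worked example in Python (the numbers are purely illustrative, not a forecast): anything that doubles on a fixed period spends ages looking flat and then explodes.

# Illustrative only: a quantity that doubles every 2 years, for 20 years.
level = 1.0
for year in range(0, 21, 2):
    print(f"year {year:2d}: {level:7.1f}x the starting level")
    level *= 2
# Output runs 1x, 2x, 4x ... 1024x: each doubling adds more than
# everything that came before it combined, which is why the curve
# feels "sudden" to a linear-thinking observer.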

It has been taking humanity less and less time to advance up the spiral. For example, stage Green countries are ALREADY experiencing the downsides of their stage Green approach, which means some of them will start moving into Yellow just a few decades after they entered Green...

 


4 hours ago, 111111 said:

@electroBeam

When it comes to the "algorithm" problem, humans themselves work mostly by algorithms. There are codes programmed into us like "survive", "fear death" and so on.

This isn't quite correct. While there may be some correlations between the behaviours of human beings and the properties of mathematical algorithms, and while it may be possible that an algorithm exists for some specific human behaviour, a human's behaviours are not strictly generated or controlled by an algorithm, but rather by subjective, qualitative states. In other words, people fear death because of the "feeling" and subjective qualia of death, not because of the laws of some possible algorithm. There is some sort of cause and effect going on (which is a property of algorithms), yet its states are qualia rather than algorithmic statements.

There are some nuances here, nuances that should be seriously considered. It's not possible to encapsulate the feeling of death in a symbol which can be used in an algorithm. I think this is obvious: we do not have a symbol which fully describes the feeling of the fear of death. We cannot type if (feel_fear == true) { run } into a computer and expect the computer to know what feel_fear is, because we haven't defined it. Of course we could say that if the algorithm is in a state where it could die, then it feels fear (like in a video game), but notice that this doesn't fully encapsulate the feeling of fear. Human beings feel fear for many more reasons than just when they could die; they also feel fear at rejection in a relationship, etc. And notice that the number of circumstances in which a person could feel fear is infinite. Therefore we will always be infinitely far from fully defining what fear is for a computer. Even if we go the AI way of doing things, and show the machine scary pictures and non-scary pictures, the AI will never be 100% accurate, and we will never be able to show it an infinite number of examples. On top of that, we would need to do this for every single human feeling that is possible. So the fear that AI is going to take over the world like in Terminator comes simply from people misunderstanding what computers are, what human beings are and what psychology is.

So as long as we are using algorithms, we will never get a computer with the capability to perform as intelligently as a human being, because, simply put, symbols are limited to how we define them, and we can never define a symbol as accurately (and for as many scenarios) as we would need. Next time you meditate, feel the feeling of fear, and notice how that feeling has an infinite amount of information in it, and how defining that feeling in terms of if/else statements (or, alternatively, neural networks) is simply impossible. Or think about the number of pictures you would need (or scenarios for a reinforcement learning algorithm) for an AI to understand all of the information contained in a subjective qualia like fear. A sketch of this definitional dead end follows below.
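
To spell out the if-statement snippet above (a hypothetical Python sketch; all the names are made up): any executable feel_fear has to bottom out in a finite, programmer-chosen checklist, and the argument is precisely that no finite checklist covers the infinite circumstances in which a human actually feels fear.

# Hypothetical sketch: "fear" as code is always a finite, hand-picked
# checklist, never the open-ended qualia itself.
FEAR_TRIGGERS = {
    "predator_nearby",
    "falling",
    "social_rejection",
    # ...necessarily a finite list; the triggers of real fear are not
}

def feel_fear(situation):
    # The machine "fears" exactly what we enumerated, nothing more.
    return bool(set(situation) & FEAR_TRIGGERS)

def act(situation):
    return "run" if feel_fear(situation) else "carry on"

print(act({"falling"}))             # run
print(act({"existential_unease"}))  # carry on: this trigger was never defined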

 

4 hours ago, 111111 said:

 

Take notice of how the AlphaGo AI works: it didn't brute-force calculate possibilities like the ancient AIs do in chess, but created patterns of its own, LIKE HUMANS DO.

 

Sure. My argument is that true AI is not possible with computers. Just because an AI generated a set of meaningful patterns on its own does not mean that the AI is in any way intelligent. It simply, strictly means that it generated meaningful patterns which solved a hard problem (a problem a human being finds hard to solve).

Of course AI in the future has the potential to solve problems at any scale; it could be used to invent a superpowered death ray like the ones in Star Wars, or even Thor's hammer, but still, whatever this AI is, it will not exhibit the properties of a brain, but the properties of an engine. You put input in, and it will give you an output. It will generate some phenomenon, whether that's a flying car or a new, more powerful nuclear bomb.

It's important to be aware of this difference (that is, the difference between a brain and an engine). The implication is that AI will give human beings dangerously great abilities to invent new things or generate knowledge, but the AI itself will not be an autonomous, self-aware entity with a brain of its own and a capacity to do anything beyond what an engine does (take some input, produce some predefined, limited, scoped output). The AI will only be dangerous because the people using it are deliberately (or in some cases inadvertently) trying to invent dangerous things. The danger will still be directly caused by humans only, not the AI itself.

 

Nuclear power is a perfect example. The nuclear bomb killed thousands of people, yet that was because the Americans wanted it to. For the inadvertent case, just look at the two power plants that have exploded due to mismanagement or natural disasters. A power plant doesn't explode because nuclear energy has a brain and a mind and intelligently decided to try to kill humans; it happens because of natural physical laws.

AI is no different, and never will be for as long as we are using computers in the conventional sense.



@electroBeam Yes, me saying that we run on algorithms was an oversimplification. I just wanted to say that the human brain is not some magical/divine thing which can't be surpassed by anything.

Ancient humans didn't understand how the weather worked, so they created gods. Present-day humans don't really understand how the brain works, so they think the possibility of something surpassing us is nothing to fear.

I don't fear an AI running on an engine. I fear that some humans will inevitably fuck up and give an AI an ego plus what we call "self-awareness". Or maybe use a "dumb AI" running on said computers/engines to create the necessary hardware/software for the "true AI" everyone fears.

I am no AI expert, for I have never worked on one. But it's extremely important to listen to "outsiders", for they can get you unstuck from unhealthy thinking loops.

