Gesundheit2

What would it take for AI to become lively and human-like?

21 posts in this topic

There are a lot of programmers here, and you're all likely into these kinds of deep philosophical questions, so this should be interesting.

What's the fundamental difference between AI and humans? Is it creativity? What is creativity? If you were tasked with solving this dilemma, how would you go about doing it?


Foolish until proven other-wise ;)


AI is dumb. It can't be human-like.

It's too overrated right now.

Ask this question 100 or 150 years from now. Right now we're not even close.


@Gesundheit2

Why would there be a difference between man-made and nature-made?

There’s nothing particularly magical about human mind/brains. We understand a great deal of what they are doing. The skull is filled with fatty tissue. That tissue is a network of largely self-similar components. We understand that components like these can be used to solve problems, recognise things, predict outcomes.

So I think it's only a matter of time. But of course I'm talking about hundreds of years. I don't expect self-conscious bots to be invented in this century. Maybe in the next one.

I think the fundamental difference between humans and our AIs today is simply consciousness. We are conscious; AI is not. Unless there is a scientific discovery of how exactly the brain produces consciousness (which it doesn't), there will be no progress in this situation.


My mind is gone to a better place. I'm elevated, going out of space. And I'm gone.


One way to conceptualize some of the problems with making human-like AI is "relevance realization": how to make a program distinguish between relevant and irrelevant information for performing a task. That is what makes biological life adaptive, the ability to reduce perception down to what is appropriate for a given goal, which in the larger picture is survival. That is another argument for why "conscious AI" might just have to be abiogenesis (if relevance realization can only be a product of living cells).


Intrinsic joy is revealed in the marriage of meaning and being.

20 hours ago, Gesundheit2 said:

What would it take for AI to become lively and human-like?

A few more months, maybe.

Thinking exponentially does not come naturally, so I think a lot of people are vastly underestimating the rate of progress.


Apparently.


Whenever I think of AI, I automatically think of a mind and what's going on inside that mind, rather than a humanoid-looking thing walking and sitting around.

A humanoid body is the least interesting part to me; it's just flashy and exciting to the herd.

According to Mo Gawdat (former chief business officer of Google X), somewhere around the year 2029 the smartest being on the planet will be a machine. His definition of creativity is pretty bare-bones: "find a solution to a problem that has not been devised/created before."

Why do you think AI will be human-like? What would be the point of inventing another human? A machine that is 100x smarter than you will be very different.

I don't know if anyone is aware of this yet, but we are literally building aliens here, sentient beings.

What's cool about it is that no human has ever interacted with anything intelligent other than another copy of themselves, but we are about to. (Other than Leo :eyeroll: I'VE MET GOD)

But we can only guess. I think we are all quite shit at predicting something we're not building directly…

My question is: how does a human build something that is 100x smarter than itself? Sounds impossible.

 


Humans can't create a fly; how can they invent something smarter than humans?

Data-gathering machines are not smart. Consciousness is.

You see, machines are designed to do whatever humans want them to do, unlike humans, who are created in the image of God and given his attributes. Humans can pass some of their attributes on to machines, but nothing like the creation of infinite intelligence.

The idea of AI having cognition like humans is impossible. They don't have an ego, energy centers, feelings, or spirit.

Aliens from other galaxies sound like a more interesting idea than AI, if they exist, especially if they are smarter than humans.


@Carl-Richard
"make a program distinguish between relevant and irrelevant information for performing a task"
This is what transformers do.
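For what it's worth, the mechanism being pointed at here can be sketched in a few lines. Below is a minimal NumPy sketch (toy numbers, not any real model's weights) of scaled dot-product attention, the core operation in a transformer: each query token scores every input token, and a softmax turns those scores into weights, so "relevant" tokens get up-weighted for the task at hand.

```python
# Minimal scaled dot-product attention in NumPy (toy values, not a real model).
import numpy as np

def attention(Q, K, V):
    """Each query scores every key; softmax turns the scores into
    relevance weights, which are used to mix the values V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the inputs
    return weights @ V, weights

# One query token attending over three input tokens.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0],    # aligned with the query -> highest weight
              [0.0, 1.0],    # orthogonal -> lowest weight
              [0.5, 0.5]])   # partially aligned -> in between
V = np.array([[10.0], [20.0], [30.0]])
out, w = attention(Q, K, V)  # w sums to 1; the aligned token dominates
```

Whether this kind of learned similarity-weighting amounts to relevance realization in the fuller sense discussed above is exactly the point of contention.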

On 21.1.2023 at 0:56 AM, Seth said:

@Carl-Richard
"make a program distinguish between relevant and irrelevant information for performing a task"
This is what transformers do.

Explain.

Think about it this way: our current programs can only be "fed" information. Relevance realization is about "grabbing" the information that is going to be fed to the program. The person using the program is like "here is the information!" and the program is like "OK, let's compute it!". To get to generalized intelligence, you need the program to do both those things at the same time at some level.

10 hours ago, axiom said:

A few more months, maybe.

Thinking exponentially does not come naturally, so I think a lot of people are vastly underestimating the rate of progress. 
 

Feel like you're exaggerating big time. The law of accelerating returns is a thing, but "months" rather than years is still a bit off.


Lex Fridman: What would illustrate to you "Holy shit, this thing is definitely thinking"?

Andrej Karpathy: To me, thinking or reasoning is just information processing and generalization, and I think the neural nets already do that today.

-- later on:

Andrej: So in my mind, consciousness is not a special thing you figure out and bolt on; I think it's an emergent phenomenon of a large enough and complex enough generative model. If you have a complex enough world model that understands the world, then it also understands its predicament in the world as being a language model, which to me is a form of consciousness or self-awareness.

On 20/01/2023 at 3:59 AM, axiom said:

A few more months, maybe.

Thinking exponentially does not come naturally, so I think a lot of people are vastly underestimating the rate of progress.

LOL. Yeah, full general artificial intelligence in a few months, folks. You heard it here first.


To add to what's already been brought up, John Vervaeke covers the relevance realization problem in a ton of detail in his 'Awakening from the Meaning Crisis' series. The philosopher Hubert Dreyfus also addresses this in his book 'What Computers (Still) Can't Do'.

But the gist of it is that Reality is disclosed to human beings in a way that what's relevant about a situation we're absorbed in tends to be immediately apparent without us having to apply rules. The reason that is so is that having a body with needs requires a practical ontology (an understanding of Being) for the purposes of survival, where what Reality *is* on an experiential level is going to be coupled to what kind of creature one is.

'Being' in this context refers to our pre-reflective, nonconceptual understanding of people and objects. Being is the most foundational way we're able to understand a tree as a tree, or a human face as a human face. It's what allows the things we come across to be meaningful for us, and it is presupposed by other forms of understanding.

When we do step back and refer to rules, it tends to be because our normal ways of skillful coping have become disrupted (such as when you run into a highly novel or unexpected situation) or when one is an absolute beginner in some domain.

Digital computers operate on different axiomatic principles than living organisms, and need to use deterministic rules to interact with their environments. The problem with using rules to try to determine what's relevant is that you also end up needing rules to apply the rules, then rules to apply those rules, ad infinitum.

This presents an intractable problem for AI because determining which of the innumerable features of one's environment are relevant for a particular purpose comes from a capacity for Care, not from applying rules. 

Organisms, including human beings, do not have this problem because our experience of Reality comes pre-structured so that what's relevant for our interests and purposes tends to be immediately obvious. This is why most of what you accomplish in your day-to-day life (walking down the stairs, brushing your teeth, recognizing faces, etc.) is done almost effortlessly, without relying on any rules.

 


I'm writing a philosophy book! Check it out at : https://7provtruths.org/

7 hours ago, DocWatts said:

This presents an intractable problem for AI because determining which of the innumerable features of one's environment are relevant for a particular purpose comes from a capacity for Care, not from applying rules. 

Organisms including human beings do not have this problem because our experience of Reality comes pre-structured so that what's relevant for our interests and purposes tends to be immediately obvious. Which is the reason why you're able to do most of the things you accomplish in your day to day life almost effortlessly, without relying on rules.

Mmyees



I would consider an AI/machine "alive" if it was programmed to survive, navigate, and interact with the world in an effort to preserve itself. After all, in the most rudimentary sense, to be living simply means to not be dying or "ceasing form". If a machine behaves in ways that protect it and resist deconstruction, that is adequate enough for me.

Some might not find that satisfactory and will instinctually feel that anything we create can never truly be alive, because it's not intrinsic if we have to program it. To which I would ask them to consider the fact that we are also running on a sort of programming 9_9

Whether machine or flesh, I don't really see a difference. Perhaps that underlying assumption that there is a special "spark" is an illusion.

To quote the great Gravemind from Halo;

"This one is machine and nerve, and has it's mind concluded..."

"This one is but flesh and faith, and is the more deluded..."

You can find wisdom in the damnedest of places!



@Roy

"A simulation is not the thing simulated" - Bernardo Kastrup

:)



More on the current limits of AI

 

 


On 2023. 01. 21. at 0:56 AM, Seth said:

"make a program distinguish between relevant and irrelevant information for performing a task"
This is what transformers do.

Nowhere near.

 
