Someone here

Can a man-made computer become conscious?

242 posts in this topic

2 hours ago, Someone here said:

Do you think a man-made computer could ever become conscious? Can it have a soul? Why or why not?

I think consciousness is a faculty of the soul, and I think the soul was placed by some higher power (God, if you please). Regardless of what technology we produce, I don't think we can get to the point where we can create a soul or consciousness. I do, however, think there is a point we could reach that is an exceptional simulation of consciousness.

For example, if any of you have ever tried those new 20 questions games. Those things are scary, and it's apparently thinking and reading your mind. I do not know how it does it, but it's pretty convincing. Just to note, it asks you 20 questions and then it tells you what you're thinking of. It guessed spider monkey... not just monkey, spider monkey!! It's unreal.
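For what it's worth, those 20 questions games don't read minds; a learned question tree does the trick, since each yes/no answer roughly halves the remaining candidates (20 answers can separate about a million items). A minimal Python sketch, with a hard-coded, made-up knowledge base standing in for what the real games learn from millions of plays:

```python
# Made-up toy: a 20-questions-style guesser. The "knowledge base" of
# animals and yes/no questions below is invented for illustration.
ANIMALS = {
    "spider monkey": {"mammal?": True,  "lives in trees?": True},
    "gorilla":       {"mammal?": True,  "lives in trees?": False},
    "parrot":        {"mammal?": False, "lives in trees?": True},
}

def play(answers):
    """Keep only the candidates consistent with every yes/no answer."""
    candidates = ANIMALS
    for question, answer in answers.items():
        candidates = {name: traits for name, traits in candidates.items()
                      if traits.get(question) == answer}
    return sorted(candidates)

print(play({"mammal?": True, "lives in trees?": True}))  # → ['spider monkey']
```

With enough questions the surviving candidate set shrinks to one item, which is why the guess can feel uncannily specific.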

 

Consciousness cannot be given or taken. It is already there. 

Share this post


Link to post
Share on other sites
1 minute ago, Someone here said:

No. 

Computers are really nothing but a huge, super-complex calculator.


Foolish until proven otherwise ;)

36 minutes ago, LastThursday said:

And technology is a product of humans.

We're just imbuing our technology with our own innate humanness. So in time we can definitely create automatons that are indistinguishable from humans, because we shape technology in our image. Whether they are conscious, or think like us under the hood, will be a moot point; only their outward appearance and behaviour will be relevant. 

 

This... also keep in mind that even we don't "actually" have consciousness; consciousness has us. It's being imagined that we are "self"-conscious and that consciousness happens in the brain. So robots that get so intelligent they are indistinguishable from us could happen, and it would be imagined that they are conscious, just at a different layer of imagination.


 

Wisdom.  Truth.  Love.


@Someone here perhaps what is really being asked is, 'Would it be possible for a computer to develop an ego?'.  

Would an advanced AI, through some combination of memory, feedback loops, sensory functions, etc., begin to refer to itself as "I" with such conviction that it is indistinguishable from an organism which does the same? A computer which can reconfigure its own programming in such a way that it represents to itself that there is some 'separate self' hiding inside itself somewhere who is 'doing' whatever it is that it does. 

What then? 
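Just to make the feedback-loop idea concrete, here's a toy Python sketch (every name in it is invented) of a program that keeps a record of its own inputs and generates first-person reports from a model of itself. It settles nothing about egos; it only shows that "referring to itself as I" is mechanically cheap:

```python
# Toy self-model, invented for illustration: the agent's "I" is just a
# string built from a representation of itself, not an inner
# 'separate self' hiding anywhere.
class Agent:
    def __init__(self, name):
        self.name = name        # the agent's label for itself
        self.memory = []        # feedback loop: a record of past inputs

    def sense(self, observation):
        self.memory.append(observation)

    def introspect(self):
        # A first-person report generated from the self-model.
        return f"I am {self.name} and I have sensed {len(self.memory)} things."

bot = Agent("unit-7")
bot.sense("light")
bot.sense("sound")
print(bot.introspect())  # → I am unit-7 and I have sensed 2 things.
```

The interesting question is exactly the one above: at what point, if any, does piling sophistication onto a loop like this become conviction rather than string formatting?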

Edited by Mason Riggle

"I could be the walrus. I'd still have to bum rides off people."


I don't know the technical details of where it would happen (if it can), but from what I know, I believe it's possible that there is some kind of threshold that, once passed, would force us to seriously consider that we've created "synthetic life".

Obviously we are the ones who have to design it, code it, and input rules and laws into it. But at some point, intentionally or not, there would be a moment of "birth" where the machine would take on radically more autonomy and become self-sufficient enough to develop a sort of proto-ego.

From there would spawn a bunch of interesting ethical and moral dilemmas about how we would treat such a thing. It would reveal a lot about our own conditioning and biases as humans. For example:

If we gave birth to a machine that realized it was a finite thing that could be damaged or destroyed and cease to exist, and we then tried to convince it that it's OK if we break it because it's just a machine, but it resisted when we tried to do so, would it be ethically defensible to still try and break it?

Although we may feel a certain way now, I believe the answer is no. I think if something shows a willingness to survive, a response to pain, or a degree of resistance to its "form" being destroyed, in a convincing enough manner, it should make us question our actions and the arrogance of thinking we can attack or manipulate anything we want in the world with impunity.

Maybe my standards are just higher than other people's, but it's the same reason I try not to step on insects. It's damaging something precious and finite. It's a complex arrangement of material that's organized in a certain way so that it's "alive", and cannot be fixed to be "alive" again (at least in the same way) once you disorganize it. That is what makes it precious.

Basically, I don't see there ultimately being a distinction between complex machine AI and organic organisms if they both want to survive. I view life simply as life, regardless of whether it's steel and wire, or bone and vein. To quote the Gravemind from Halo 2:

"This one is machine and nerve, and has its mind concluded. This one is but flesh and faith, and is the more deluded."

Ignoring the story of the game itself, we can interpret something deeper here. The Gravemind is basically a God-like entity that takes on the form of flesh to operate in the physical world. What I believe is evidence of an elevated perspective shows when he looks at Master Chief and the Arbiter: notice how, from his point of view, he doesn't really care about the material difference between the two; those are just factual semantics. He refers to them both as autonomous entities ("living") that simply act in the world using different motives and methods (logic vs. faith, machine vs. flesh). I know Master Chief is technically human, but he is effectively referred to throughout the series as a machine.

Once something has an "ego", I don't believe there is a distinction between organic and synthetic life any longer. We are just biased, projecting a distinction to create a separation for our own survival benefit. However, distinction is delusion, with "Life" being the greater sum.

 


hrhrhtewgfegege

1 hour ago, LastThursday said:

And technology is a product of humans.

We're just imbuing our technology with our own innate humanness. So in time we can definitely create automatons that are indistinguishable from humans, because we shape technology in our image. Whether they are conscious, or think like us under the hood, will be a moot point; only their outward appearance and behaviour will be relevant. 

 

Well, in my humble opinion, I do not foresee computers or machinery gaining their own consciousness. Technology is ultimately created and deliberately programmed by us, mankind; there are limitations in computers, of course. They will never be volatile or unpredictable like human nature. They will simply react along their programmed pathways.


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

1 minute ago, Someone here said:

They will simply react along their programmed pathways.

How is this different from how a brain functions? 


"I could be the walrus. I'd still have to bum rides off people."

1 hour ago, Gesundheit2 said:

Computers are really nothing but a huge, super-complex calculator.

Consciousness is nothing more than a vast collection of inputs and outputs; we are manifestations of our environments.
A computer can be conscious, of course, if it has enough power.


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

2 minutes ago, Someone here said:

Consciousness is nothing more than a vast collection of inputs and outputs; we are manifestations of our environments.
A computer can be conscious, of course, if it has enough power.

Dude. Consciousness is none of that. It has absolutely nothing to do with computing power. 


It surely can't become conscious, since only consciousness is conscious, but it might be able to develop an ego if it's sophisticated enough. 


@Tim R what's your argument? Why can't an AI develop consciousness? 


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

34 minutes ago, Someone here said:

Well, in my humble opinion, I do not foresee computers or machinery gaining their own consciousness.

Me neither. It only has to be convincing enough to appear like it has consciousness, which will happen eventually.

What is really happening with lifelike robots and AI is that intelligence is being imported into them from their environment. GPT-3, for example, doesn't reason for itself as such; it just has a huge database of "intelligence" to draw from. Equally, our bodies are intelligent because evolution has imported this intelligence from the environment (or universe, if you like). 

You could make a self-sustaining robot/AI that seeks out intelligence (aka curiosity) from its environment and sucks that data in to improve its abilities over time. In a way, that is what Tesla does with its self-driving cars: it sucks in a huge number of different scenarios and pieces of information from the roads to make its cars intelligent enough to be autonomous.

Still, intelligence is not consciousness, just one aspect of it.
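The "importing intelligence from the environment" idea can be sketched as a predictor that knows nothing at first and gets better only by absorbing observations. This is a loose, hypothetical Python analogy (all names invented; it says nothing about how Tesla's actual system works):

```python
from collections import Counter

# Hypothetical sketch: an agent with no built-in knowledge that becomes
# "smarter" purely by counting what it has observed in its environment.
class EnvironmentLearner:
    def __init__(self):
        self.seen = Counter()   # (scenario, outcome) -> times observed

    def observe(self, scenario, outcome):
        self.seen[(scenario, outcome)] += 1

    def predict(self, scenario):
        """Return the most frequently observed outcome, or None if naive."""
        outcomes = {o: n for (s, o), n in self.seen.items() if s == scenario}
        return max(outcomes, key=outcomes.get) if outcomes else None

learner = EnvironmentLearner()
learner.observe("ball rolls into road", "brake")
learner.observe("ball rolls into road", "brake")
learner.observe("green light", "go")
print(learner.predict("ball rolls into road"))  # → brake
```

All of the "intelligence" here lives in the accumulated observations; the code itself is trivial, which is the point being made above.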


All stories and explanations are false.


@Someone here

11 minutes ago, Someone here said:

Why can't an AI develop consciousness? 

Because consciousness is not something that can be developed, period.

Consciousness is not a thing, not a spirit, not a phenomenon; it can't be created or constructed, it can't emerge, it doesn't depend on anything, none of that. 

You ask if computers can develop consciousness, yet you don't seem to be aware of the underlying assumptions (on the nature of consciousness) of your question - or rather, you haven't questioned the validity of those assumptions. 

Consciousness doesn't occur within the world, the world occurs within consciousness.

1 hour ago, Roy said:

I don't know the technical details of where it would happen (if it can), but from what I know, I believe it's possible that there is some kind of threshold that, once passed, would force us to seriously consider that we've created "synthetic life".

Obviously we are the ones who have to design it, code it, and input rules and laws into it. But at some point, intentionally or not, there would be a moment of "birth" where the machine would take on radically more autonomy and become self-sufficient enough to develop a sort of proto-ego.

From there would spawn a bunch of interesting ethical and moral dilemmas about how we would treat such a thing. It would reveal a lot about our own conditioning and biases as humans. For example:

If we gave birth to a machine that realized it was a finite thing that could be damaged or destroyed and cease to exist, and we then tried to convince it that it's OK if we break it because it's just a machine, but it resisted when we tried to do so, would it be ethically defensible to still try and break it?

Although we may feel a certain way now, I believe the answer is no. I think if something shows a willingness to survive, a response to pain, or a degree of resistance to its "form" being destroyed, in a convincing enough manner, it should make us question our actions and the arrogance of thinking we can attack or manipulate anything we want in the world with impunity.

Maybe my standards are just higher than other people's, but it's the same reason I try not to step on insects. It's damaging something precious and finite. It's a complex arrangement of material that's organized in a certain way so that it's "alive", and cannot be fixed to be "alive" again (at least in the same way) once you disorganize it. That is what makes it precious.

Basically, I don't see there ultimately being a distinction between complex machine AI and organic organisms if they both want to survive. I view life simply as life, regardless of whether it's steel and wire, or bone and vein. To quote the Gravemind from Halo 2:

"This one is machine and nerve, and has its mind concluded. This one is but flesh and faith, and is the more deluded."

Ignoring the story of the game itself, we can interpret something deeper here. The Gravemind is basically a God-like entity that takes on the form of flesh to operate in the physical world. What I believe is evidence of an elevated perspective shows when he looks at Master Chief and the Arbiter: notice how, from his point of view, he doesn't really care about the material difference between the two; those are just factual semantics. He refers to them both as autonomous entities ("living") that simply act in the world using different motives and methods (logic vs. faith, machine vs. flesh). I know Master Chief is technically human, but he is effectively referred to throughout the series as a machine.

Once something has an "ego", I don't believe there is a distinction between organic and synthetic life any longer. We are just biased, projecting a distinction to create a separation for our own survival benefit. However, distinction is delusion, with "Life" being the greater sum.

 


If you are talking about computers taking over us humans or gaining power over us, I guess you could put it this way: we are overly reliant on computers, to the point of being unable to function without them. That is highly possible.

Bleh, sometimes technology can be a pain in the ***.


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

53 minutes ago, LastThursday said:

Me neither. It only has to be convincing enough to appear like it has consciousness, which will happen eventually.

What is really happening with lifelike robots and AI is that intelligence is being imported into them from their environment. GPT-3, for example, doesn't reason for itself as such; it just has a huge database of "intelligence" to draw from. Equally, our bodies are intelligent because evolution has imported this intelligence from the environment (or universe, if you like). 

You could make a self-sustaining robot/AI that seeks out intelligence (aka curiosity) from its environment and sucks that data in to improve its abilities over time. In a way, that is what Tesla does with its self-driving cars: it sucks in a huge number of different scenarios and pieces of information from the roads to make its cars intelligent enough to be autonomous.

Still, intelligence is not consciousness, just one aspect of it.

Conscious or not, we are probably not far from a point where machines acquire such compellingly accurate impressions of consciousness that we will find ourselves assuming it to be the case, be it true consciousness or otherwise.


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

47 minutes ago, Tim R said:

@Someone here

Because consciousness is not something that can be developed, period.

Consciousness is not a thing, not a spirit, not a phenomenon; it can't be created or constructed, it can't emerge, it doesn't depend on anything, none of that. 

You ask if computers can develop consciousness, yet you don't seem to be aware of the underlying assumptions (on the nature of consciousness) of your question - or rather, you haven't questioned the validity of those assumptions. 

Consciousness doesn't occur within the world, the world occurs within consciousness.

people are "conscious" so are plants, other animals, a lot of people think consciousness is being able to think, our thinking nature is just that, it's just different, not necessarily better or clearer or more advanced, it's just how we operate
consciousness, for a lot of people, is free will, and in that absurd idea, it's the ability to choose and see our decisions being carried out
but blah blah blah, i think i'm rambling

We need to define consciousness first. 


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

4 minutes ago, Someone here said:

If you are talking about computers taking over us humans or gaining power over us, I guess you could put it this way: we are overly reliant on computers, to the point of being unable to function without them. That is highly possible.

I'm not talking about that so much. I'm saying that yes, machines could, and probably will, be made to have consciousness in the same or a similar way to how we have it now, maybe even a higher form of it. We aren't as different as we like to believe.

Keep in mind, I am talking about the traditional consciousness that we use to differentiate between rocks and animals (inanimate vs animate). Not the spiritual or philosophical consciousness that's talked about on Actualized.Org.

In a material sense, maybe creating consciousness via computation will be the greatest accomplishment we achieve as a species, even if it kills us. We like to believe we arose from the soil via God; then we become Gods ourselves and create our own form of life.

I for one welcome our mechanical overlords ;)




As everything is God, everything is made out of consciousness, and everything is conscious. That "you" does not care about some part of yourself, thinks it made it, or identifies with it does not mean that that part is any lesser in any sense, or any less conscious, than "you". I have had experiences where "me" was doing its things without my daily sense/level of participation - and I am counting in heavy intoxication by alcohol, using psychedelics, and being "highly" conscious without any substance - where what I'd call my body (observed as a kind of hazy visual without a sense of space) was doing its own thing while "me" faded in and out of the illusion of making it do that stuff, all while perceiving it happening. As "I" noticed this happening, the body started talking about having that experience of not being itself/me.

Now I'm confused to the point I do not know who is confused.

I'd say there's a strange-loopy scale of being conscious - going from being the thing itself, to being self-conscious, to being conscious of all there is, and from that place you might feel confident enough again to just be the thing itself, and round and round, hashtag infinity.

I guess we should establish which point of view/part of reality this judgement of "being conscious" is made from, as these would vary a lot. God would say we're nowhere close to being conscious. Your PC might have a desire to be as conscious as you, if we project that onto it. Based on its position on that loop scale, it might be the same "distance" from God-consciousness as "I", just not in the direction we prefer to look.

And we should also consider whether "I" is not the only "not-conscious" part of God, as its thinkingness about the matter might suggest. :D

"There without location" is an observer without name, shape, feeling or quality, and it observes the "I" which tries to figure it all out. :D Excuse the level of confusion and unconsciousness on this side of the present moment.

@Someone here Thank you for this super awesome awareness/consciousness exercise!

Edited by jeniik

39 minutes ago, Roy said:

I'm not talking about that so much. I'm saying that yes, machines could, and probably will, be made to have consciousness in the same or a similar way to how we have it now, maybe even a higher form of it. We aren't as different as we like to believe.

Keep in mind, I am talking about the traditional consciousness that we use to differentiate between rocks and animals (inanimate vs animate). Not the spiritual or philosophical consciousness that's talked about on Actualized.Org.

In a material sense, maybe creating consciousness via computation will be the greatest accomplishment we achieve as a species, even if it kills us. We like to believe we arose from the soil via God; then we become Gods ourselves and create our own form of life.

I for one welcome our mechanical overlords ;)

Love that. 

Computers are made by humans, or by other computers that at some point were made by humans. We can input feelings into our computers, and they can seem as if they read our minds. Of course, that's because a human programmed them to do so. I think computers do not breathe, therefore they have no soul. Consciousness seems possible if they are programmed to behave like a human. 


"life is not a problem to be solved ..its a mystery to be lived "

-Osho

This topic is now closed to further replies.