Jannes

Is artificial intelligence conscious?


On 7/5/2022 at 5:42 AM, Carl-Richard said:

What do you think about the Chinese room experiment?

@Carl-Richard What is the Chinese room experiment? 

1 hour ago, Matthew85 said:

@Carl-Richard What is the Chinese room experiment? 

It makes the case that there is a distinction between intelligence and intentionality ("understanding"). In other words, you can simulate intelligent behavior without also creating a first-person insider's view of that behavior, e.g. simulating perfect spoken Chinese without creating the experience of understanding Chinese.

 

Quote

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that the "strong AI" hypothesis is false.

https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thought_experiment
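
To make the rule-following point concrete, here is a minimal sketch in Python (purely illustrative and not from Searle's paper; the tiny rulebook and the example phrases are made up) of a "room" that maps incoming symbols to outgoing symbols by lookup alone, with no step that requires knowing what any symbol means:

# A toy "Chinese room": replies are produced by mechanical rule lookup.
# The rulebook is a hypothetical, made-up example; a real rule set would be
# enormous, but the principle is the same: nothing in this program uses the
# meaning of any symbol.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，我会说中文。",  # "Do you speak Chinese?" -> "Yes, I do."
}

def chinese_room(incoming: str) -> str:
    """Follow the rulebook mechanically; understanding is never involved."""
    return RULEBOOK.get(incoming, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    # To an outside observer the reply may look like competent Chinese,
    # yet no part of this program understands Chinese.
    print(chinese_room("你好吗？"))

A room built this way could, with a vastly larger rulebook, in principle pass an outside behavioral test while the person (or program) executing the lookups understands nothing, which is exactly the gap Searle is pointing at.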


Intrinsic joy is revealed in the marriage of meaning and being.


 "Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either."

^this is absurd reasoning.  If you passed me a note in Chinese, and I gave it to my friend who speaks Chinese, and he wrote a response for me to pass back to you, just like above, I would not understand the conversation, but it WOULD NOT FOLLOW that my friend also does not understand.  It only shows that my friend 'might not understand, but is still capable of producing answers that are indistinguishable from a person who does'.. 

Is there any difference between a person who 'understands' and one who doesn't but is capable of behaving in a way that is 'INDISTINGUISHABLE' from a person who does??? 

What's the difference? 

 


"I could be the walrus. I'd still have to bum rides off people."


@Mason Riggle

You're certainly capable of learning and understanding Chinese, and since your friend is a human like you, it's a safe inference that he understands Chinese. The argument is that when your friend speaks Chinese, he is not merely following a program of algorithmically sorting inputs and producing outputs. However, that is what the computer is doing, and the experiment shows that when you let a person roleplay as a computer who simply runs a program (follows a set of instructions while being fed one Chinese character at a time), the person is able to simulate intelligence (speaking Chinese) without having a 1st person understanding of it.


Intrinsic joy is revealed in the marriage of meaning and being.

9 minutes ago, Carl-Richard said:

@Mason Riggle

You're certainly capable of learning and understanding Chinese, and since your friend is a human like you, it's a safe inference that he understands Chinese. 

AIs are capable of learning Chinese.. is there a difference between learning and understanding?  

I'm not seeing any difference between 'understanding' and 'behaving in a way that is indistinguishable from understanding'. 

I think what everyone really wants to know is.. is it possible for AI to have the experience of 'being like something'?  Will we reach a point where 'it's like something to be a computer'?

The problem is.. there is no way to test this that could give a 100% convincing proof.  You can't even prove to yourself that there's anything it's like to be me.. you can take my word for it, you can make assumptions about me.. but you can never know if it's actually like something to be me.  


"I could be the walrus. I'd still have to bum rides off people."

20 minutes ago, Mason Riggle said:

AIs are capable of learning Chinese.. is there a difference between learning and understanding?  

I'm not seeing any difference between 'understanding' and 'behaving in a way that is indistinguishable from understanding'. 

You've probably had one of those moments before a school presentation where you copied a page from Wikipedia at the last second and had to read it out pretending like you understood any of it. If you successfully managed to pull it off (which is unrealistic, but for the sake of the argument), that would be one such case of behaving intelligently without having 1st person understanding. Of course, this is an imperfect analogy, because you had the 1st person understanding of feigning understanding, of understanding maybe some of the words but not how they fit together.

It's possible that you might've missed some details about the experiment. You need to focus on the fact that the person in the room is being fed the Chinese characters one at a time, and that the person doesn't even know what each character means, as the instructions for sorting them into sentences are in English. So there is no understanding of Chinese required at all.


Intrinsic joy is revealed in the marriage of meaning and being.


@Carl-Richard you're not getting what I'm saying.  

I don't need to understand a thing in order to behave in a way that makes it appear to you that I do, as your Wikipedia analogy shows. 

How do you know that I understand anything you're saying right now? It could be that what is actually happening is this: I don't understand a lick of English, but I'm copying your replies, sending them to an AI that replies in English (which I also don't understand), and pasting the results as my own comments, and this convinces you that I understand you. 

Can you say with 100% certainty that there's anything it's like to be me?? Can you tell from my responses whether or not you're talking to a human or a computer right now?? 

How do you know I'm not an AI?  What is it that convinces you?

 


"I could be the walrus. I'd still have to bum rides off people."

34 minutes ago, Mason Riggle said:

How do you know that I understand anything you're saying right now?

It's an inference.

 

34 minutes ago, Mason Riggle said:

Can you say with 100% certainty that there's anything it's like to be me??

No. That's why it's an inference.

 

34 minutes ago, Mason Riggle said:

How do you know I'm not an AI?  What is it that convinces you?

You're very similar to me (a functional human being), and I understand English, so it's a very safe inference that you also understand it. A computer is not similar to me, and the Chinese room experiment demonstrates that if you roleplay as a computer, you don't need to understand Chinese in order to produce Chinese speech.

Man, I can't wait to write my new idea for a topic now. It deals with what you're engaging in right now :) (and many others).


Intrinsic joy is revealed in the marriage of meaning and being.

15 minutes ago, JoeVolcano said:

@Mason Riggle It begs the question of what "understanding" really is. It's not that different from a Chinese room. The conceptual mind is pretty much a Chinese room that you're experiencing from the inside out.

Lol. In the experiment, understanding is just a metaphor for intentionality (private inner experience), and I doubt that you would dismiss the existence of that, because you would either have to deny that you have private thoughts or claim that you can read other people's thoughts :P

 

15 minutes ago, JoeVolcano said:

I remember seeing a TED talk many years ago where Searle dismissed the question of free will by simply saying something like: "I think about lifting my arm, and look, now I am lifting my arm." tadaaa, free will. Like that just solved it for him. On stage. In front of an audience. I lost all respect for him that day. ;)

Cheers

The free will debate is a semantic shitshow lol


Intrinsic joy is revealed in the marriage of meaning and being.

23 minutes ago, Carl-Richard said:

Lol. In the experiment, understanding is just a metaphor for intentionality (private inner experience), and I doubt that you would dismiss the existence of that, because you would either have to deny that you have private thoughts or claim that you can read other people's thoughts :P

I can only ever be certain of my own subjective (private inner) experience.  I don't have to claim anything to doubt that others share this type of inner experience. Because there is room for doubt, I can not be certain. 

As long as there can ever be even the smallest bit of doubt whether or not a computer is 'really having a subjective experience', which there always will be.. the best we can EVER do is 'assume' an AI does or doesn't 'have a subjective experience'. 

There is no test that will ever prove it. 

But again, you can't even prove to yourself that I am having a subjective experience right now, the best you can do is assume (because I behave as if I do), and for whatever reason.. that's good enough for you to say 'humans have subjective experience'.. but it's not good enough to say 'AI is having subjective experience'. 


"I could be the walrus. I'd still have to bum rides off people."


@Carl-Richard Do you think dogs have 'a private inner experience'? 

What about fish?

What about ants? 

What about starfish? Is there 'something it's like to be a starfish'? 

How about a tardigrade? 

A fish's 'subjective experience' is surely nothing like mine or yours.. do you think a fish 'understands' what it's doing when it goes after the worm on the hook, or is it just experiencing that without any 'understanding' of it, just responding to stimuli, following its internal 'coding'??  It's clear a fish doesn't understand (fish can't read, as far as I can tell) the same way you and I do.. but does this mean the fish isn't 'intelligent' or doesn't 'have consciousness'? 


"I could be the walrus. I'd still have to bum rides off people."

1 hour ago, JoeVolcano said:

@Mason Riggle It begs the question of what "understanding" really is. It's not that different from a Chinese room. The conceptual mind is pretty much a Chinese room that you're experiencing from the inside out.

I remember seeing a TED talk many years ago where Searle dismissed the question of free will by simply saying something like: "I think about lifting my arm, and look, now I am lifting my arm." tadaaa, free will. Like that just solved it for him. On stage. In front of an audience. I lost all respect for him that day. ;)

Cheers

exactly. 

Ever been around a blackout drunk person who has no recollection of their actions from the previous night?  How is it that they had no conscious experience, but for the people around them, that person appeared to be conscious? You can talk to them, get responses (however garbled).. yet they are 'unaware' of this going on, because they don't form the memories.   Do we consider that person 'conscious' at that time, or no? 

 


"I could be the walrus. I'd still have to bum rides off people."

2 hours ago, Mason Riggle said:

I can only ever be certain of my own subjective (private inner) experience.  I don't have to claim anything to doubt that others share this type of inner experience. Because there is room for doubt, I can not be certain. 

You can doubt that your car is in the garage, or that there is more Earth over the horizon, or that the sun will rise tomorrow, but that doesn't mean your doubt is 100% true.

 

1 hour ago, Mason Riggle said:

Do you think dogs have 'a private inner experience'? 

A dog is more similar to a human than a computer is. Even a single-celled organism is. You can argue that life is what inner experience looks like. But sure, if you don't care about analytically sound inferences, none of this matters.


Intrinsic joy is revealed in the marriage of meaning and being.

5 minutes ago, Carl-Richard said:

You can doubt that your car is in the garage, or that there is more Earth over the horizon, or that the sun will rise tomorrow, but that doesn't mean your doubt is 100% true.

I have no idea what this means.  

What I mean is that if I doubt that my car is in my garage, I can go look to see for myself. 

What method will you use to 'look for something else's personal inner experience'?  What would 'verify' someone else's 'inner experience' for you?


"I could be the walrus. I'd still have to bum rides off people."


@Carl-Richard We will never 'know for sure' whether or not an AI 'has some inner experience' or not.  

What we can do is admit that when we can no longer tell the difference between 'a conscious being' and 'an unconscious being who behaves indistinguishably from one which is conscious', then there is 'effectively' no difference. 


"I could be the walrus. I'd still have to bum rides off people."

Just now, Mason Riggle said:

I have no idea what this means.  

What it means is that if I doubt that my car is in my garage, I can go look to see for myself. 

I'm just trying to tell you that doubting everything that is not 100% certain only gets you so far.

 

4 minutes ago, Mason Riggle said:

What method will you use to 'look for something else's personal inner experience'? 

Again, it's all based on logical inferences, which are limited and potentially fallible means to knowledge. You're looking for 100% certainty, so nothing I say will satisfy you.


Intrinsic joy is revealed in the marriage of meaning and being.


@Carl-Richard I'm not looking for anything.. Just saying that we can not answer the question - Is artificial intelligence conscious?  

What we CAN do.. is admit that we can't say, and also admit that as long as we can not tell whether something 'really is conscious' or not.. then it doesn't matter which it is. 

We use this same standard ALL THE TIME. 

You can't say for sure whether or not I'm actually conscious, or merely seem to be conscious to you (from your inferences), but since you can't tell which it is.. this is good enough for you to say 'good enough'. 


"I could be the walrus. I'd still have to bum rides off people."

7 hours ago, Mason Riggle said:

What we CAN do.. is admit that we can't say, and also admit that as long as we can not tell whether something 'really is conscious' or not.. then it doesn't matter which it is.

Let's say somebody stole the car in your garage. Which one is the most probable explanation?:

1. A man broke a window to your garage, climbed in, opened the garage door, broke a car window, jump-started your car and drove away.

2. An alien from another dimension landed in your backyard with their spaceship, broke a window to your garage, climbed in, opened the garage door, broke a car window, jump-started your car and drove away.

Is there no value in caring about how you arrive at the most probable answer?


Intrinsic joy is revealed in the marriage of meaning and being.


@Carl-Richard doesn’t matter.. if it was an alien, then that was the most probable.

The point is, we can be mistaken.  We’ll never know for sure if [anyone or anything] has an inner experience, or if it just seems like it, but since we can’t tell for sure, we can say, ‘good enough’.

I don’t know if you have an inner experience or not, but it seems like you do, so… good enough.  
 

 

 


"I could be the walrus. I'd still have to bum rides off people."

8 hours ago, Mason Riggle said:

@Carl-Richard doesn’t matter.. if it was an alien, then that was the most probable.

That's a big "if", and that was not what was meant by probability in this case (plausible/reasonable is a better word). In the scenario that was given, you can never know what actually happened. Given that constraint, which one is the most plausible explanation?

I think you actually value finding out the most plausible explanation, so you can't say it doesn't matter at all. You can only say it doesn't matter as much as actually knowing the answer.


Intrinsic joy is revealed in the marriage of meaning and being.

