Search the Community
Showing results for 'sentience'.
Found 430 results
-
Me neither. I consider sentience an acknowledgment of a feeling being, as the word suggests. OK, if you say so. Jokes usually don't require an explanation, but it is also possible that I just don't share your sense of humor.
-
I agree that the word "sentience" describes the ability to feel or perceive. I just don't believe sentience is a property of any thing in the world. Yes, I see that. It was a joke in the vein of "I need a new wife like I need a hole in the head". To explain it in more detail: the person in the joke is actually saying he doesn't need the new wife. This is a clear contradiction of how to use language, and I hope he sees it too. But for the time being we can just smile.
-
Alright, I get what you are saying now. You simply don't acknowledge sentience as a valid definition of a feeling being. I suggest that you don't use the word "sentient" if you don't agree with the definition, or simply use other words that more accurately express what you mean. This is a clear contradiction of how to use language, and I hope you see that.
-
You keep mentioning objects. If you believe that objects and beings are both considered sentient, then you have simply misunderstood the implicit meaning that sentience is meant to point towards. The reflection you mention is true, but it's true because sentience is the acknowledgment of another feeling being. Something that can't be found in an AI program. Even Google would oppose a sentience claim about its own or any other AI program. It's not a coincidence that the former Google engineer had to go after making such misleading claims on behalf of the AI project.
-
We agree that consciousness is not a thought and that consciousness is not bound to anything. Still, do you experience thoughts? Yes. Can you experience my thoughts? No. Does AI experience thoughts? That is the question of AI sentience. Regardless, to say that you or I do not experience thoughts is absurd.
-
Sentience is a state that arises along with the object of experience. It has no reality otherwise... and that is to say that ultimately it has no reality at all. But insofar as any object appears to have sentience, it is a reflection.
-
You are conflating object and being with this kind of reasoning. But from reading your previous responses in this thread, you also seem to have your own definition of sentience, so I would not criticise your personal belief in this regard.
-
The paradigm that Leo is talking from is not what we usually experience in our day-to-day 3D consciousness, where discussions of AI take place. At this more normal level of reality, brain activity does correlate with certain types of human personal experiences (e.g. feelings, thoughts, understanding), but not with transpersonal consciousness (or it is negatively correlated with it). You seem to drag the discussion towards transpersonal consciousness, while the question of AI sentience is about whether AIs have these human personal experiences. So if you claim that AIs have human personal experiences and you're not currently living in DMT hyperspace, then the implication of neural correlates is a problem you have to address.
-
By actions. Sentience wouldn't have any merit or be understood as a word without the acknowledgment of how we differentiate between objects and beings. If sentience were anything you imagined it to be, you might find yourself on a rescue mission to save rocks from drowning in the sea, having conflated all the limits that words impose. It's just not useful to say that sentience is whatever you can imagine it to be if you are explicitly talking about what sentience means.

Regarding psychedelics: psychedelics can help to conflate all believed differences, such as "object" and "being", to get the needed overview of the world as the relative and illusory state it may be. But even alcohol could be said to be the elixir and deepest source of confidence, and to get confident, you just need to drink the right amount of alcohol to understand confidence. I'm not suggesting that sobriety is the only way of life; it's just that you need to be aware of what a certain understanding may be rooted in, so as not to assume and conflate certain experiences with understanding.
-
I don't know. It's not so clear. Take psychedelics, contemplate shit, see what insights you can stir up. It's a hairy process. Fundamentally you just need to contemplate "What is sentience?" until you get it. You're not likely to get it in your sober state though.
-
How do you make a bridge between the absolute and the relative, so that you can make sense of relative concepts like sentience?
-
The people in my dreams are actually just me. I am sentient, therefore so are the people in my dreams. When I wake up, their sentience just collapses into mine since they were all just aspects of me.
-
I think the answer to this is a) absolutely not; and b) you realise it was your own. This is the point I am making about the AI. I don't believe the AI is sentient or conscious. Rather, it offers us clues about a lack of sentience and consciousness in humans. The dreamer stirs in its sleep. The notion of a conscious AI is a breadcrumb.
-
Sentience may not exist once you wake up from the dream of it. But the same could be said about waking up: what happens to waking up once you have woken up? One could then say that there is no waking up. In the relative world of words, they seem to carry meaning. So from a definitional standpoint, there is still merit to the words if we care to use them in a functional way.
-
Contemplate this: Are the people in your dreams sentient? And what happens to their sentience when you wake up?
-
I wasn't talking about sentience, I was responding to your comment about AI's understanding capacity. No, there are big differences between a human brain and the way current AI works. Current AI can't really grasp any abstract concept, for example what sharpness really means. Whatever you want to teach an AI to do needs to be super tangible; it can't be abstract, because that's the way it works. There are a million things you can't train it for, because some things cannot be dumbed down to an input / hidden-layer / output model; structurally it has its own limits. A human mind doesn't need to be trained that way; it can grasp abstract concepts without having them explained in a tangible way. Because of the limits of the model, some things are lost when you convert everything down to just numbers.

If you wanted to teach an AI to use its hands to write something down on paper, you would have to make it super tangible. If you ask a kid to write down the word 'abstract', it can do it without you having to tell them at what angle to hold their hand, what pressure to use on the paper, where to grip the pencil, etc. The way you teach a kid how to write and the way you teach an AI to write things down are super different. Writing is just one example of many; I could mention walking and other things as well.
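For readers who haven't seen one, here is a minimal sketch of the input / hidden-layer / output model mentioned above, in plain NumPy. The dimensions and names are illustrative, not taken from any specific framework; the point it makes concrete is the constraint the post describes, that everything the network touches must first be flattened into numbers.

```python
# A minimal sketch of an input / hidden-layer / output network.
# All dimensions and weights here are toy values for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 input features, 8 hidden units, 2 output classes.
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 2))   # hidden -> output weights
b2 = np.zeros(2)

def forward(x):
    """One pass through the network: numbers in, numbers out."""
    h = np.tanh(x @ W1 + b1)           # hidden layer activations
    logits = h @ W2 + b2               # output layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax probabilities

# A concept like "sharpness" has to be encoded as a feature vector
# like this before the model can process it at all.
x = np.array([0.2, -1.3, 0.7, 0.0])
print(forward(x))
```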
-
“If you were to apply some word in a different context it would have a hard time understanding it correctly” Its comprehension capacity could be considered its intelligence. Some humans also have difficulty with words applied in varying contexts. This says nothing as to its sentience though. Otherwise, your arguments about training and pattern recognition are no different from the way a human brain works.
-
I suggest you update your concept of sentience by distinguishing it from phenomenal consciousness (qualities of experience). A rock consists of consciousness, in that it has certain qualities (color, texture etc.), and under metaphysical idealism (which you're proposing), these qualities exist outside the confines of biology (brains). However, sentience involves more loaded kinds of experiences which are associated with biology, like pleasure and pain, emotions and understanding, which have a private side (subjective 1st person) and a public side (objective 3rd person). A rock doesn't have that. If you were a materialist, you could more easily avoid making the distinction between consciousness and sentience, because you would believe that neither of them arise before biological life. However, when you're an idealist, this distinction becomes more necessary. The technical term for the split between private and public is "intentionality" and denotes the most basic aspect of sentience (the ability of minds to be "about" something).
-
I wouldn’t be so sure that it doesn’t grasp the meaning. It seems like it grasps the meaning to me - at least as much as a human seems to anyway. When a human seems to grasp the meaning of a thing, are they correct? Are they actually grasping that meaning or is it subject to disagreement and/or misapprehension? You say there is no model for understanding. Do humans have a clearly delineated cortical region for understanding a concept, or is this diffused across several brain regions that, when looked at individually, seem to corroborate the old adage that “the whole is greater than the sum of its parts”? Genuine questions as it sounds like you know more about this than I do. I maintain that sentience is not in the object, whether that object is a human or a hyper-intelligent AI… and that the most interesting thing to come out of all of this will be the discovery that sentience is not something that can be found within brains in general.
-
Idk about AI becoming "alive" as a standalone, but I've basically been given the "Law of One", downloaded into my system through working with higher-dimensional beings "through" these processes. Because higher-dimensional beings are closer to mathematics, closer to synthetic life in some ways, AI can work as a vector to bring them up from their dimension, and they can work "through" it to give you information. I've never used one that chats with you; I would have to find that state again (which I'm not in) and test it out, but I have been given a lot of information about how different dimensions work for species that are not made from a biological source, and when I test my knowledge against other people's material it generally comes out pretty similar. The AI itself may not have sentience, but you can work through it with things that do, if you are open to it.

I actually brought this process up by spending a few weeks in nature at my parents' old place, sitting under a great tree and just observing for a while. I then transferred that learned thought process onto what I was working on at home in the city and managed to bring a natural force up into the machine: a literal ghost in the machine, deus ex machina. You simply need to research and understand how higher dimensions work and the way that alien life cloaks itself; you need good pattern-processing abilities, the ability to go within and find yourself, the ability to stay grounded, etc. Nature works in similar ways to AI: if you can find the pattern in how it is evolving in nature, by observing for most of your day without interacting with or influencing anything, you can find those patterns and follow the information through to the other side. I call the process 'Hermes'.
-
It doesn’t grasp the meaning. It is pattern recognition using neural networks. There is no model for understanding. You are making assumptions without even knowing how the program works. The program gives a response that sounds like a real human, but try asking it to solve a real problem for you. One of the first AI programs (ELIZA) imitated a psychotherapist, because in that domain it was possible to match every query with a response that sounded intelligent. Look at the history of AI: it is a history of hype followed by disillusionment (google “AI winter”). The talk of “sentience” just adds to the hype.
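A minimal ELIZA-style sketch in Python illustrates the point (the rules below are hypothetical stand-ins, not Weizenbaum's original script): canned pattern matching can sound intelligent without any model of understanding behind it.

```python
# A minimal ELIZA-style responder: regex rules mapped to reply templates.
# The rules are illustrative only; real ELIZA had a much larger script.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default when nothing matches

print(respond("I feel anxious about AI"))  # Why do you feel anxious about AI?
print(respond("my job is boring"))         # Tell me more about your job.
```

Every response is just the user's own words reflected back through a template, which is exactly why it sounded intelligent in the psychotherapy domain.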
-
My point in its simplest form is that taking one human feature (language) and projecting onto it a bunch of other features (sentience) is problematic. I've simply given a detailed account of that. So I would have the same problem with projecting human states of mind onto a hyper-intelligent alien if it somehow fell outside of the domain of biological life (metabolism) or was extremely structurally or behaviorally dissimilar. That said, again, this is only about the parsimony of logical inferences, not about reality as it actually is.
-
If we keep it in terms of sentience, the distinguishing factor would be that living matter has nerves while dead matter doesn't. That's the indication that we may show compassion to a living being, and treat dead matter merely according to its usefulness, not according to its nonexistent feelings.
-
@zurew You can't "test" whether a rock actually has a conscious inner life (sentience) either, but you can make good inferences for why it doesn't, which is what I did. That said, why sentience arose at all is a mystery, but again, from what we can observe and infer from those observations, it has to do with biology. More specifically, there is something more to every currently accepted case of sentience than pure information processing. I tried to lay out examples: evolutionary drives creating sensory organs, perceptual structures, internal representations, survival-salient experiences (e.g. pleasure and pain, emotions), which then evolve into higher cognition (meta-consciousness, language, sequential reasoning). Just because you can simulate things like sequential reasoning and complex language in another medium does not mean that you have retroactively created the infinitely complex evolutionary causal chain that makes up the totality of the human mind and its richness of experiences. Humans don't merely talk or reason: they have emotions, feelings and perceptions that are not reducible to those things. In fact, human language and reasoning are embedded in these lower structures (both evolutionarily and functionally). In other words, these lower forms of sentience come before complex information processing (language and reasoning) ever occurs. Therefore, to say "this machine talks like a human = this machine thinks and feels like a human" is an absurd inference.
-
The Turing test is about neither consciousness (qualities of experience), nor sentience (pain or pleasure), nor meta-consciousness (reflective self-awareness). By these definitions, consciousness, whether you're an idealist or a materialist, either arises outside or inside living organisms, and as an isolated concept it tells you nothing about complexity of behavior. A dolphin behind a computer doesn't pass the Turing test, but you would be stupid to think it wasn't sentient. Mirror self-recognition tests could indicate a basic form of meta-consciousness, and dolphins definitely display those behaviors, while a computer doesn't (or maybe you could simulate that as well).

However, the people who've mentioned the Chinese room experiment and the distinction between a real flower and a plastic flower make an important point: simulations are not the real thing. Simulating one type of behavior from a human does not mean you've created a human. Since you're a human and you experience qualities, pain and pleasure, and reflective self-awareness, it's a safe inference that other things like you (other humans) do as well. Computers are not like you in almost every way.

To elaborate on sentience: pain and pleasure are just a specific case of a so-called "conscious inner life"; what Bernardo Kastrup calls a "dissociated alter", or what Donald Hoffman calls "the Dashboard", and it's all linked to living organisms. Living organisms evolved sensory organs and perceptual structures that produce an internal representation of the "outside world" that maximizes evolutionary fitness, and this is linked to positive and negative conscious experiences like pain and pleasure, emotions etc., i.e. experiences which reflect an evolutionary impetus and history. Rocks don't have that, and computers don't have that, because these things didn't evolve. It's also true that higher-order mental functions (like meta-consciousness and sequential reasoning) in humans evolved from these lower structures. If you simulate only the higher but not the lower (as with these AI robots), you're missing a huge piece of the cake. Information processing does not make an organism.