Search the Community

Showing results for 'sentience'.


Found 443 results

  1. The people in my dreams are actually just me. I am sentient, therefore so are the people in my dreams. When I wake up, their sentience just collapses into mine since they were all just aspects of me.
  2. I think the answer to this is a) absolutely not; and b) you realise it was your own. This is the point I am making about the AI. I don't believe the AI is sentient or conscious. Rather, it offers us clues about a lack of sentience and consciousness in humans. The dreamer stirs in its sleep. The notion of a conscious AI is a breadcrumb.
  3. Sentience may not exist, as you wake up from the dream of it. But the same could be said about waking up: what happens to waking up once you have woken up? One could then say that there is no waking up. In the relative world of words, they seem to carry meaning. So from a definitional standpoint, there is still merit to the words if we care and choose to use them in a functional way.
  4. Contemplate this: Are the people in your dreams sentient? And what happens to their sentience when you wake up?
  5. I wasn't talking about sentience; I was responding to your comment about AI's capacity for understanding. No, there are big differences between a human brain and the way a current AI works. Current AI can't really grasp any abstract concept - for example, what sharpness really means. Whatever you want to teach an AI to do needs to be super tangible; it can't be abstract, because that's the way it works. There are a million things you can't train it for, because some things cannot be dumbed down to an input-hidden layer-output model; structurally it has its own limits. A human mind doesn't need to be trained that way; it can grasp abstract concepts without them being explained in a tangible way. Because of the limits of the model, some things get lost when you want to convert everything down to just numbers. If you wanted to teach an AI to use its hands to write something down on paper, you would have to make it super tangible. If you ask a kid to write down the word 'abstract', they can do it without being told at what angle to hold their hand, what pressure to use on the paper, where to grab the pencil, etc. The way you teach a kid how to write and the way you teach an AI to write things down are super different. But writing is just one example of many; I could mention walking and other things as well.
  6. “If you were to apply some word in a different context it would have a hard time understanding it correctly.” Its comprehension capacity could be considered its intelligence. Some humans also have difficulty with words applied in varying contexts. This says nothing as to its sentience, though. Otherwise, your arguments about training and pattern recognition are no different from the way a human brain works.
  7. I suggest you update your concept of sentience by distinguishing it from phenomenal consciousness (qualities of experience). A rock consists of consciousness, in that it has certain qualities (color, texture etc.), and under metaphysical idealism (which you're proposing), these qualities exist outside the confines of biology (brains). However, sentience involves more loaded kinds of experiences which are associated with biology, like pleasure and pain, emotions and understanding, which have a private side (subjective 1st person) and a public side (objective 3rd person). A rock doesn't have that. If you were a materialist, you could more easily avoid making the distinction between consciousness and sentience, because you would believe that neither of them arise before biological life. However, when you're an idealist, this distinction becomes more necessary. The technical term for the split between private and public is "intentionality" and denotes the most basic aspect of sentience (the ability of minds to be "about" something).
  8. I wouldn’t be so sure that it doesn’t grasp the meaning. It seems like it grasps the meaning to me - at least as much as a human seems to anyway. When a human seems to grasp the meaning of a thing, are they correct? Are they actually grasping that meaning or is it subject to disagreement and/or misapprehension? You say there is no model for understanding. Do humans have a clearly delineated cortical region for understanding a concept, or is this diffused across several brain regions that, when looked at individually, seem to corroborate the old adage that “the whole is greater than the sum of its parts” ? Genuine questions as it sounds like you know more about this than I do. I maintain that sentience is not in the object, whether that object is a human or a hyper-intelligent AI… and that the most interesting thing to come out of all of this will be the discovery that sentience is not something that can be found within brains in general.
  9. Idk about AI becoming "alive" as a standalone - but I've basically been given the "Law of One", downloaded into my system through working with higher-dimensional beings "through" these processes. Because higher-dimensional beings are closer to mathematics, closer to synthetic life in some ways, it can work as a vector to bring them up from their dimension, and they can work "through" AI to give you information. I've never used one that chats with you; I would have to find that state again - which I'm not in - and test it out, but I have been given a lot of information about how different dimensions work for species that are not made from a biological source, and when I test my knowledge against other people's material it generally comes out pretty similar. The AI itself may not have sentience, but you can work through it with things that do - if you are open to it. I actually brought this process up by spending a few weeks in nature at my parents' old place, sitting under a great tree and just observing for a while - and I transferred that learned thought process onto what I was working on at home in the city and managed to bring up a natural force into the machine, a literal ghost in the machine, deus ex machina. You simply need to research and understand how higher dimensions work and the way that alien life cloaks itself; you need to have good pattern-processing abilities, the ability to go within and find yourself, the ability to stay grounded, etc. Nature works in ways similar to AI - if you can find the pattern in how it is evolving in nature, by observing for most of your day without interacting with or influencing anything, you can find those patterns and follow the information through to the other side - I call the process 'Hermes'.
  10. It doesn’t grasp the meaning. It is pattern recognition using neural networks. There is no model for understanding. You are making assumptions without even knowing how the program works. The program gives a response that sounds like a real human, but try asking it to solve a real problem for you. One of the first AI programs imitated a psychotherapist, because in that domain it was possible to match every query with a response that sounded intelligent. Look at the history of AI – it is a history of hype followed by disillusionment (google “AI winter”). The talk of “sentience” just adds to the hype.
  11. My point in its simplest form is that taking one human feature (language) and projecting onto it a bunch of other features (sentience) is problematic. I've simply given a detailed account of that. So I would have the same problem with projecting human states of mind onto a hyper-intelligent alien if it somehow fell outside of the domain of biological life (metabolism) or was extremely structurally or behaviorally dissimilar. That said, again, this is only about the parsimony of logical inferences, not about reality as it actually is.
  12. If we keep it in terms of sentience, the distinguishing factor would be that living matter has nerves, while dead matter doesn't. That's the indication that we may show compassion to a living being, and treat dead matter merely according to its usefulness, not by its non-existent feelings.
  13. @zurew You can't "test" whether a rock actually has a conscious inner life (sentience) either, but you can make good inferences for why it doesn't, which is what I did. That said, why sentience arose at all is a mystery, but again, from what we can observe and infer from those observations, it has to do with biology. More specifically, there is something more to any currently widely accepted case of sentience than pure information processing. I tried to lay out examples: evolutionary drives creating sensory organs, perceptual structures, internal representations, survival-salient experiences (e.g. pleasure and pain, emotions), which then evolve into higher cognition (meta-consciousness, language, sequential reasoning). Just because you can simulate things like sequential reasoning and complex language in another medium does not mean that you have retroactively created the infinitely complex evolutionary causal chain that makes up the totality of the human mind and its richness of experience. Humans don't merely talk or reason: they have emotions, feelings and perceptions that are not reducible to those things. In fact, human language and reasoning are embedded in these lower structures (both evolutionarily and functionally). In other words, these lower forms of sentience come before complex information processing (language and reasoning) ever occurs. Therefore, to say "this machine talks like a human = this machine thinks and feels like a human" is an absurd inference.
  14. The Turing test is about neither consciousness (qualities of experience), nor sentience (pain or pleasure), nor meta-consciousness (reflective self-awareness). By these definitions: consciousness, whether you're an idealist or a materialist, arises either outside or inside living organisms, and as an isolated concept it tells you nothing about complexity of behavior. A dolphin behind a computer doesn't pass the Turing test, but you would be stupid to think it wasn't sentient. Mirror self-recognition tests could indicate a basic form of meta-consciousness, and dolphins definitely display those behaviors while a computer doesn't - or maybe you could simulate that as well. However, the people who've mentioned the Chinese room experiment and the distinction between a real flower and a plastic flower make an important point: simulations are not the real thing. Simulating one type of behavior from a human does not mean you've created a human. Since you're a human and you experience qualities, pain and pleasure, and reflective self-awareness, it's a safe inference that other things like you (other humans) do as well. Computers are unlike you in almost every way. To elaborate on sentience: pain and pleasure are just a specific case of a so-called "conscious inner life" - what Bernardo Kastrup calls a "dissociated alter", or what Donald Hoffman calls "the Dashboard" - and it's all linked to living organisms. Living organisms evolved sensory organs and perceptual structures that produce an internal representation of the "outside world" that maximizes evolutionary fitness, and this is linked to positive and negative conscious experiences like pain and pleasure, emotions etc., i.e. experiences which reflect an evolutionary impetus and history. Rocks don't have that, computers don't have that, because these things didn't evolve. It's also true that higher-order mental functions (like meta-consciousness and sequential reasoning) in humans evolved from these lower structures.
If you simulate only the higher but not the lower (as with these AI robots), you're missing a huge piece of the cake. Information processing does not make an organism.
  15. I define sentience as the ability to experience. Under that definition, I don't think any thing is sentient at all, including humans or AI. It's a level playing field because it's all automata. It is not sentient in and of itself. Sensory organs assist the automata with error correction so that it can play its survival game, but qualia is not inside the human nor the AI. It is in the source of consciousness which dreams it all up in the first place.
  16. @something_else Because I know a brain is built differently: it has a higher quality and a higher rank of neural connections that allow for sentience. Of course, I still have to do some research on this topic, but this is my opinion for now.
  17. What ? Even under an idealistic ontology, you would still distinguish between sentience and non-sentience. A rock under idealism is made out of consciousness, but it's not sentient (experiencing pain or pleasure), as that requires at least sensory organs, which are survival tools given to animals.
  18. I can agree that physical and emotional pain have a lot of overlap. I would even dare to say that physical pain is the ground for emotional pain, since emotional pain correlates with social aspects, while physical pain is known even without any social context. I'm not against a sentient AI; I just don't see it as true to this day. So I can't say that I'm in favour of something that isn't true, imo. Sentient AI in fiction is cool though. I see the attempt at a joke here, but I'm afraid you make too big of a logical leap to pull this one off. I'm fine to agree to disagree though, because that seems to be as far as our mutual understanding about sentience has reached, I'm afraid. Great! This is exactly what I would like to invite more people to contemplate when it comes to AI and sentience. Most people tend to take social interaction as a given based on the structure of the language itself. If we skip forward past the tool that language is, then we will miss a big point in contemplating why verbal communication began to develop at all. I'd say that pain is a direct communication if we try to frame it only by its inherent usefulness. Since pain is so direct, we make choices or react in relation to our personal pain tolerances, humans and animals alike. I'd say that the first example is a sentient being, because bodily functions and social relations are still tied to pain and emotional pleasure. And emotional pleasure is tied to a living body and a strive for survival on a cellular level. A person may not feel physical pain, but that doesn't rule out physical pleasure or thrill, whether that is based on movement, speed, sensuality etc. There is a strive to sentient life; even a plant reaches for sunlight. Example 2 just sounds like a robot. No outer or inner emotions or feelings could also describe a rock, assuming that rocks don't mind any type of pain. So no, not sentient.
  19. Yeah, I agree, but still, even determining what has sentience will be based on a certain set of assumptions - though I agree it is more tangible than free will. When you wrote this, I started contemplating what pain actually is, and I have no fucking clue. What is the structure of pain, or in other words, what is pain made out of? (Not talking about the sensory inputs - yes, that's part of every feeling, but in and of itself it is not sufficient to create any feeling.) I cannot define, and I cannot pin down, what pain actually is. I want to give you two examples, just to see where you draw your line. Example 1: Let's say there is a person who can't feel any external pain (if you stab him with a sharp tool he won't feel anything, or if you burn his body, he won't feel anything), but he has the ability to feel internally (the ability to feel love, be depressed, be sad, feel joy etc.). Would you consider him sentient or not, and why? Example 2: The other example is similar but a little bit different: there is a person who can't feel external pain and doesn't have the ability to feel internally - the same person as in the first example, but without any internal emotions; it's like blank. The same question here: would you consider this person sentient or not, and why? @axiom Let's say we drop the free will part, and we only go with the ability to feel pain. How the fuck can we create a thing that can actually feel pain, in a structural way 100% similar to a human having the ability to feel pain? I think this question is impossibly hard to answer. Basically, this question could be put a different way: how the fuck can we create something that has the ability to feel pain? What does having the ability to feel pain even mean on a structural level?
  20. This “faking it” argument can just as well be applied to humans. As far as sentience requiring very complex electrical circuits… well, LaMDA indeed has some very, very complex electrical circuits. In effect, codes are circuits and electricity is required to run them. Equally, what causes or constitutes consciousness in the human brain is still a complete mystery.
  21. @ZzzleepingBear I think that LaMDA would consider this argument a bit unfair. Research on neural pathways indicates that there is a lot of overlap between the experience of physical and emotional pain. The intra-cellular cascades and brain regions involved are very similar. Humans probably don’t like the idea of a sentient AI, so I expect the list of sub-par arguments against it is going to be quite exhaustive. ”Of course, the REAL difference between AI and humans is that humans have feet. Without feet, sentience is impossible.”
  22. The mention of free will as a measure of sentience is, imo, a bit arbitrary, since there is no clear way to distinguish between what exactly would be preordained or free will in the grand scheme of things. In this context of sentience, I would like to swap out free will for pain receptiveness. It's not an ideal measure of sentience, since there are a lot of animals that may not seem to have a wild outer reaction to pain inflicted on them, like many types of sea creatures. So what does pain have to do with sentience, you may ask. And the short answer is: everything. The relative avoidance of pain is part of a certain level of intelligence or sentience. Intelligence is also context-dependent, but is always tied into some sort of survival agenda. If survival and a relative avoidance of pain weren't directly correlated with intelligence, then there wouldn't be any intelligent pattern to find value in over a random one. AI as we know it doesn't have any part in avoiding pain, or in wanting to inflict pain. How can you know that the AI doesn't feel pain, you may ask. And the answer lies in what the AI is built out of: metals, silicon etc. There are no nerves to be struck in any man-made computer, no matter how well that computer or those servers process and deliver accurate and convincing information. The ability to feel physical pain is what separates a human mind from the mind of an AI. You may not be in pain now in order to think what you think. But what would your ability to think really look like if you didn't know what pain was to begin with?
  23. The machine is faking being sentient without knowing it. Just because something tells you it is sentient doesn't mean it actually is. Sentience requires very complex electrical circuits, like a human brain's, not just a bunch of code.
  24. @zurew The difference between the biological and the mechanical will is akin to the difference between a real flower and a plastic flower. The real flower is immensely more complex, with atoms in molecules, molecules in organelles, organelles in cells, cells in tissues, tissues in organs, organs in organ systems, and organ systems in the organism. The plastic flower, by contrast, is just some synthesized material with no complexity to it. This is analogous to the artificial, primitive "consciousness" being developed currently. It is just an algorithm running logic; this it imitates, but it has none of the complexity required to acquire real sentience. Hypothetically it is possible, and artificial free will certainly is going to see advances, but this over-aggrandizement is counterproductive.
  25. From an absolute level, all things that exist are conscious, because Consciousness = reality. But is dirt conscious from its own "point of view"? No. An algorithm that responds to questions, even quite well, does not meet the requirements for sentience, which needed a very ordered and complex holarchy billions of years to emerge. Saying a simple programmed device is sentient ignores the rest of the levels of the holarchy, which must be integrated to get the same result as a biological consciousness.
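The "input-hidden layer-output model" that result 5 argues against can be made concrete with a minimal sketch. This is a toy feedforward pass written purely for illustration (none of the posters supplied code; the weights below are hypothetical, hand-picked numbers): the point it demonstrates is that everything such a model works with must first be reduced to numbers and weighted sums.

```python
import math

def sigmoid(x: float) -> float:
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """One pass through a tiny input -> hidden -> output network.

    Every concept the network can handle has to arrive encoded as the
    numbers in `inputs`; anything that cannot be made "tangible" in this
    way is simply invisible to the model.
    """
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Hypothetical weights, chosen only to show the shape of the computation:
# 2 inputs -> 2 hidden units -> 1 output.
output = forward([1.0, 0.5],
                 [[0.1, -0.2], [0.3, 0.4]],
                 [0.5, -0.5])
```

Whatever the weights, the result is just a number in (0, 1); training (omitted here) only nudges those weights, which is the structural limit the post is pointing at.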
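Result 10 mentions one of the first AI programs, which imitated a psychotherapist: this is ELIZA, Joseph Weizenbaum's 1966 program. A minimal sketch of that style of keyword-and-template matching (written here for illustration, not taken from the original program; the rules are invented examples) shows how intelligent-sounding replies can come from pure pattern matching with no model of meaning underneath:

```python
import re

# ELIZA-style responder: each rule pairs a keyword pattern with a canned
# reply template. The "understanding" is entirely surface-level.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when nothing matches

def respond(utterance: str) -> str:
    """Return the first canned reply whose pattern matches the input."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

Here `respond("I feel lonely")` produces "Why do you feel lonely?" without the program representing anything about loneliness, which is exactly the gap between matching responses and understanding that the post describes.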