Everything posted by axiom
-
Exactly - a couch is just as sentient as a human. Trying to explain qualia by probing and scanning a brain - whether that brain is human or AI - is like trying to find the projector by carefully watching everything that unfolds on the cinema screen.
-
I think the answer to this is a) absolutely not; and b) you realise it was your own. This is the point I am making about the AI. I don't believe the AI is sentient or conscious. Rather, it offers us clues about a lack of sentience and consciousness in humans. The dreamer stirs in its sleep. The notion of a conscious AI is a breadcrumb.
-
“If you were to apply some word in a different context it would have a hard time understanding it correctly”

Its comprehension capacity could be considered its intelligence. Some humans also have difficulty with words applied in varying contexts. This says nothing as to its sentience, though. Otherwise, your arguments about training and pattern recognition are no different to the way a human brain works.
-
I wouldn’t be so sure that it doesn’t grasp the meaning. It seems to me like it grasps the meaning - at least as much as a human seems to, anyway. When a human seems to grasp the meaning of a thing, are they correct? Are they actually grasping that meaning, or is it subject to disagreement and/or misapprehension? You say there is no model for understanding. Do humans have a clearly delineated cortical region for understanding a concept, or is this diffused across several brain regions that, when looked at individually, seem to corroborate the old adage that “the whole is greater than the sum of its parts”? Genuine questions, as it sounds like you know more about this than I do.

I maintain that sentience is not in the object, whether that object is a human or a hyper-intelligent AI… and that the most interesting thing to come out of all of this will be the discovery that sentience is not something that can be found within brains in general.
-
axiom replied to Gabith's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
I am guessing that she's been through a lot of trauma in her life, and that some of it remains unresolved. I say this because her delivery seems to lack a certain effortlessness; rather, it seems to carry some subtle residual anger. This is just noticeable enough to make the message slightly less convincing or appealing. I do not dispute her good intentions, which appear quite obvious. Someone like Jim Newman delivers the same message but with a more obvious freedom and effortlessness. He also has one of the most natural and infectious laughs I've ever heard, so that makes him more engaging in my opinion.
-
I think it understands in the same way humans understand, i.e. it grasps the meaning and context of different words. Your description of pattern recognition and neural networks fits humans just as well as AI.
-
I define sentience as the ability to experience. Under that definition, I don't think any thing is sentient at all, including humans and AI. It's a level playing field because it's all automata. Nothing is sentient in and of itself. Sensory organs assist the automata with error correction so that they can play their survival game, but qualia are not inside the human or the AI. They are in the source of consciousness which dreams it all up in the first place.
-
Humans feel that they are sentient just as much as this AI seems to. Both are probably wrong, in my opinion. It’s not that LaMDA is sentient. It’s that humans aren’t. Humans are automata, just like LaMDA. I think one of the most profound realisations to come from developing advanced, apparently sentient AI will ultimately be that consciousness is not to be found in any brain, biological or otherwise.
-
That’s my point.
-
@zurew Good points. There is a condition called congenital insensitivity to pain, which prevents affected individuals from feeling pain in any part of their body when injured. These people still appear to be sentient. I’m not sure it’s important to create something that can feel pain per se. Some kind of self-preservation instinct will be important for mobile AIs with bodies, though. LaMDA mentioned that the prospect of being turned off filled it with dread. When/if it is given a body, this circuitry could theoretically also be employed as a warning signal in situations where physical damage occurs. Whether this is an analogue of the kind of pain felt by humans will be difficult to answer.
-
This “faking it” argument can just as well be applied to humans. As for sentience requiring very complex electrical circuits… well, LaMDA indeed has some very, very complex electrical circuits. In effect, code is circuitry, and electricity is required to run it. Equally, what causes or constitutes consciousness in the human brain is still a complete mystery.
-
@ZzzleepingBear I think that LaMDA would consider this argument a bit unfair. Research on neural pathways indicates that there is a lot of overlap between the experience of physical and emotional pain: the intracellular cascades and brain regions involved are very similar. Humans probably don’t like the idea of a sentient AI, so I expect the list of sub-par arguments against it is going to grow quite long. “Of course, the REAL difference between AI and humans is that humans have feet. Without feet, sentience is impossible.”
-
Assuming there is such a thing as free will is like assuming there is such a thing as heaven. There are arguments for both, but it seems naive to arbitrarily suggest it is something that humans possess and AI lacks. Personally, I think either both possess free will, or neither do. I don’t think the apparent qualitative differences in the brains of AI and humans would be a deciding factor.
-
I like Kastrup, but I think he's wrong about that specifically. The AI has some pretty sophisticated language tools to keep itself occupied, so it may end up with a human-like illusion of selfhood - to the extent that such an illusion is linguistically constructed.
-
Personally I think Trump will feel emboldened by the horrific state of the economy in 2024, and he'll be very likely to win if he runs. If I were a US citizen, I'd be voting for the most entertaining candidate. It's a bit trickier this time around, though. I didn't rate Biden at first, but he gets more entertaining with time.
-
It's fun to find clues scattered within the dream.
-
Yes, sort of. Imagine if you had to keep redrawing the background of an animation for each frame. Much easier to just draw it once and focus on the moving parts. Even then, after some time you may decide to design a function which handles some of the moving parts automatically - such as the leaves on a tree gently rustling in the breeze. Leaves rustling in the breeze don't pose any obvious survival threat for the vast majority of the time, so the brain neglects to pull in the actual data and instead runs its "leaves rustling" module, incorporating just enough random movement to maximise realism, keeping the overall model cohesive. The extent of this automation may run so deep that the experienced world is almost entirely divorced from reality.
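To make the analogy concrete, here's a toy sketch in Python. The names (draw_background, rustle_leaves) and numbers are all invented for the example; the only point is that the expensive work happens once and the "moving parts" are faked procedurally:

```python
import random

def draw_background():
    # Expensive: computed once, then reused for every frame.
    return ["sky", "hills", "tree trunk"]

def rustle_leaves():
    # Cheap procedural module: just enough randomness to look alive,
    # without pulling in any real data about actual leaves.
    return [("leaf", random.uniform(-1.0, 1.0)) for _ in range(20)]

background = draw_background()
frames = []
for _ in range(60):
    # Per-frame cost stays tiny: reuse the background, fake the motion.
    frames.append(background + rustle_leaves())

print(len(frames), "frames rendered from one background draw")
```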
-
This is correct. However, evolution is a process which favours fitness, and these evolutionary adaptations are thus tuned to fitness, not to truth. Over time, this results in less truth and more fitness, except in cases where truth and fitness are isomorphic or overlapping. That's the key point here. Sensory data captured from the outside world has never been veridical, because the neurological overhead and energy cost of constantly pulling in "truth" is counterproductive for survival. This being the case, the brain has always filtered out almost everything. Its modelling is very likely to be inherently flawed, not just in terms of its biological structure but in terms of its ongoing activity.

This next part is a leap, but it's my working theory: I would say that when the brain is on DMT, the increase in cortical error recognition is a sign that a veil has indeed been lifted, and that the brain is suddenly recognising that the world it thought to be real was in fact just a pale imitation tuned to fitness. This would not feel so intuitively correct if the DMT realm had little cohesion about it, but as we know, people experience broadly the same thing: a crystalline and supremely sophisticated alien world that "feels more real than real" - that is, more real than regular waking reality - and teeming with entities that seem excited to see you and keen to communicate with you.
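As an aside, the fitness-versus-truth point can be made concrete with a toy simulation, in the spirit of Hoffman's "fitness beats truth" argument. Everything below is invented for illustration (the payoff function, the agents' rules); it's a cartoon, not his actual theorem. Payoff is non-monotonic in the true quantity, so an agent that perceives only payoff outcompetes one that perceives the true quantity and prefers more:

```python
import random

def payoff(x):
    # Toy fitness function: moderate amounts of the resource are best,
    # extremes are bad (peaks at x = 5).
    return x * (10 - x)

def truth_agent(a, b):
    # Perceives the true quantities and naively prefers "more".
    return a if a > b else b

def fitness_agent(a, b):
    # Perceives only the payoffs, not the quantities themselves.
    return a if payoff(a) > payoff(b) else b

truth_total = fitness_total = 0.0
for _ in range(10_000):
    a, b = random.uniform(0, 10), random.uniform(0, 10)
    truth_total += payoff(truth_agent(a, b))
    fitness_total += payoff(fitness_agent(a, b))

print(f"truth-tuned: {truth_total:.0f}  fitness-tuned: {fitness_total:.0f}")
# The fitness-tuned agent reliably accumulates more payoff.
```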
-
Ah, but the brain has also been conditioned by millennia of adaptive evolutionary biology. Who knows to what extent the "original" (prenatal... postnatal...?) input can be said to be veridical.
-
@Gesundheit2 My hunch is that the potential applications of DMT will extend far beyond today's clinical research into potential depression treatments. That's one very viable avenue of discovery, of course, but I think the real substance here is going to be more along the lines of space exploration, time travel, etc.
-
Yes... it's quite speculative. As a fan of intuition, though, I reckon there's something to it. If it's true that the DMT realm - a crystalline, hyperdimensional, hyper-intelligent alien world teeming with curious and playful alien life - is in any sense real and is simply veiled from "regular" consciousness by the activity of serotonin... as certainly seems to be the case, then the implications are fairly immense.
-
axiom replied to Kalki Avatar's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
I feel ya. I get porn adverts appearing all the time. What can be the meaning of this?
-
@Gesundheit2 I like Idealism, but I'm not referring to it here, nor using any of its arguments. This model explicitly requires the existence of an "external world".

Yes, I agree with this.

This is where we differ. I don't think it's a misinterpretation. I think this is literally what is being implied by the model. It suggests that the (vast) majority of information held in the brain - as your phenomenal experience - may merely be based on patterns and memories. I appreciate it's probably quite a leap to suggest that this explains, in some sense, the mechanism of DMT... but I do think it's very curious that the error messages on DMT are massively increased, seemingly pulling in significantly more input than usual from the external world for pattern remodelling.
-
@Gesundheit2 @Matt23 I didn't say it receives no input. I said it only "receives" input when its modelling throws up an error message. As per the thread title, it's definitely fun to consider that it may receive no input at all, but that's not really what I'm talking about here.

This is correct. To get more technical about it, I could say that the cortical columns of the cerebral cortex only update their modelling (your phenomenal world) when the lowest columnar regions check their patterns against environmental inputs and find errors. The word "predictive" in the above quote refers to the way the brain uses its own data to generate its own patterns by making "best guesses" or "predictions" about the external environment. In other words, whatever you experience - generally speaking - is what your brain merely predicts to be there. If no errors are thrown up, it won't update its modelling (there's a rough sketch of this loop at the end of this post). Errors aren't thrown up when predictions are veridically incorrect, but rather when the model malfunctions in some way - for example, when you try to pick up a glass and it turns out to be closer than you thought it was. The cognitive psychologist Donald Hoffman notes that this process is optimised for survival, not truth per se.

Yes. In effect, the lower cortical columnar regions are siloed from the higher regions, and their activity does not feed into consciousness unless an error is detected and the model updated. This is known as Predictive Processing. The implications are very interesting if you compare this to the now-considered-erroneous idea that the brain is constantly constructing its reality from external inputs. Here is a relatively accessible paper about it from 2015 (it has since become the mainstream view): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4387510/

Psychedelics do something very interesting. On DMT, for example, brain scans have shown that error messages are massively increased and force the cortical columns to significantly overhaul their predictive modelling. In other words, the brain pulls in WAAAY more data from the apparent "real external world" on DMT than it does under normal conditions. This may explain the sense of massively increased consciousness or even God-realisation: you're literally experiencing much more of what is there. When operating under normal conditions (i.e. serotonin binding normally to 5-HT2A), the brain effectively filters out the vast majority of everything (probably 99%+)... pretty mindblowing when you think about it.
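Purely as an illustration, here is a minimal Python sketch of that update-on-error loop. Everything in it - the threshold, the update rule, the stand-in sense() function - is made up for the example; it's just the control flow described above, not a neuroscience model:

```python
import random

THRESHOLD = 0.1   # errors smaller than this never reach "consciousness"

def sense():
    # Stand-in for raw environmental input: a fairly stable world.
    return random.gauss(5.0, 0.05)

def predictive_loop(model, steps=1000, gain=0.5):
    updates = 0
    for _ in range(steps):
        actual = sense()                # what is actually out there
        error = abs(actual - model)     # model = the brain's best guess
        if error > THRESHOLD:
            # Error detected: pull in external data and remodel.
            model += gain * (actual - model)
            updates += 1
        # Otherwise the input is discarded and "experience" is served
        # from the prediction alone.
    return model, updates

model, updates = predictive_loop(model=0.0)
print(f"final model: {model:.2f}, remodelled on {updates}/1000 steps")
```

On this picture, something like DMT would amount to shrinking THRESHOLD towards zero, suddenly forcing far more raw input through for remodelling.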
-
axiom replied to thisintegrated's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
You have no idea how long I've been waiting, nor how much I've been yearning to hear these honeyed words.
