Everything posted by Carl-Richard
-
Ok, so you're not suggesting any radical new methodologies, just existing methodologies aimed at investigating spirituality? Well, science is already doing that (and has done so for over a century, e.g. William James). These are the first three results when you look up "neural correlates of spirituality" in Google Scholar:
"Neural correlates of personalized spiritual experiences"
"Classic Hallucinogens and Mystical Experiences: Phenomenology and Neural Correlates"
"Neural correlates of a mystical experience in Carmelite nuns"
-
Give me an example of a study involving spirituality that science doesn't already do.
-
The vibes and the feels of especially the later parts.
-
I see.
-
Wut
-
@Bobby_2021 The reason I asked what it would take to change your mind is that, for this type of empirical question ("does x correlate with y?"), the weakness of your approach is that you can almost always find counterexamples to your examples. And I can easily do that, so let's do an experiment: do you think I will change your mind or not? And if I don't, what does that say about relying on "your own reasoning" vs. relying on statistics derived from science?

Which questionnaire are you talking about? Neither the MAAS, the FFMQ nor the FMI mentions "music" in any of its questions.

Chess is repetitive, but people like to associate it with high IQ. Mindfulness meditation, which often involves bringing your attention back to the breath when your mind starts to wander, is repetitive.

I was going to ask "what makes Messi high in mindfulness but not Tyson and Bolt, when all of them are professional athletes who engage in 'repetitive low-IQ/low-mindfulness behaviors'?", but you seemed to answer that in your second paragraph: Messi is less physically gifted and therefore has to be more intelligent/mindful by necessity, or else he wouldn't be as successful as he is. So to change your mind, do I need to find an example of a professional football player of comparable success who is less physically gifted and also less intelligent/mindful? Jack Grealish?

There you can see the weakness of the method you're using: you can always bring up counterexamples to any given example that you're using to support your conclusion. So how do you decisively decide that mindfulness is not correlated with physical activity?
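As a rough, purely illustrative sketch (made-up numbers, not real data) of how that question would actually be settled: you estimate the correlation across a whole sample instead of arguing from individual athletes, e.g. something like this in Python:

```python
# Hypothetical illustration: testing a correlation across a sample
# instead of reasoning from single counterexamples like Messi or Tyson.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 200  # imagined sample size

physical_activity = rng.normal(loc=5.0, scale=2.0, size=n)  # e.g. hours/week
# Made-up trait mindfulness scores, weakly related to activity plus noise:
mindfulness = 3.5 + 0.05 * physical_activity + rng.normal(0.0, 0.8, size=n)

r, p = pearsonr(physical_activity, mindfulness)
print(f"r = {r:.2f}, p = {p:.3f}")
# A single counterexample can't settle the question; the sample-level
# estimate (and its uncertainty) is what a correlational claim is about.
```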
-
Carl-Richard replied to Taya's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Definitely. The voice of conscience (the inner maternal voice of morality and reason that tells you what and what not to do) speaks through your intuition and guides you towards higher levels of consciousness. It's heavily simplified, but evolutionarily speaking, it's reasonable to think that humans primarily developed language and a strong feeling of social responsibility because the mother had to take care of her severely underdeveloped young. So it's not surprising that you've been given an internal representation of this dynamic as a voice (which at least for me feels like my mother's voice). You can also think of it more generally as the echoes of your ancestors. Sheldrake's morphic resonance also comes to mind (he is going to be on a panel discussion at my university in December).
-
I see all these terms as different things. Seems like you just want friends that are like you.
-
What would it take to convince you that you're wrong?
-
So you're saying my philosophical observation sucks?
-
I had an insight: what you could call the crisis in the field, which goes by many names (e.g. "the replication crisis", "the theory crisis", "the generalizability crisis"), is essentially a crisis of construct awareness. We've become painfully aware of how we've chosen to construct our science and how it has not been working out so well. Before, we took our constructions for granted and let them shape our view of the world in an uncritical way. Construct awareness involves not just becoming aware of one's own constructions, but also taking responsibility for them. And that is what the future of the field will most likely look like, judging by how vigilant the establishment is about stuffing multiple dozens of papers on the issue down the throats of new master's students: the new generation of aspiring scientists who are the only hope for saving the field.
-
Is this based on research, or is it just your intuition? It's important not to get the two mixed up in this discussion.
-
Carl-Richard replied to jdc7733's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
But not Being itself.
-
Carl-Richard replied to jdc7733's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Only limited beings have to work things out.
-
If you were to do a psychological study of mindfulness, you would start by finding a general definition of it. Most scientists seem to use Jon Kabat-Zinn's definition: "Mindfulness is awareness that arises through paying attention, on purpose, in the present moment, non-judgementally". Then you need to find an operational definition, which is a little more specific. For mindfulness, it's most common to think of it as either a trait (relatively stable over time), a state (relatively transient), or a practice (e.g. meditation). Then you decide how you want to measure it (e.g. using a questionnaire). In the realm of questionnaires, there are many alternatives. The one I used, and the one that is generally most used, is the Mindful Attention Awareness Scale (MAAS), which measures trait mindfulness. Other common ones are the Five Facet Mindfulness Questionnaire (FFMQ) and the Freiburg Mindfulness Inventory (FMI). So you see that already before you've arrived at a measurement, there are many forking paths, which is part of the problem. If studying human phenomena is already hard in itself, then studying a subtle and nebulous facet like mindfulness is probably even harder.
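For a rough sketch of what the measurement step can look like in practice (the item count and response scale below match the MAAS as I remember it, so treat the details as illustrative rather than official):

```python
# Illustrative scoring of a MAAS-style trait mindfulness questionnaire:
# 15 items rated on a 1-6 Likert scale, scored as the mean of the items
# (higher = more trait mindfulness). Details are illustrative, not official.
from statistics import mean

def score_maas_like(responses: list[int]) -> float:
    """Return the trait mindfulness score as the item mean."""
    if len(responses) != 15:
        raise ValueError("Expected 15 item responses")
    if any(not 1 <= r <= 6 for r in responses):
        raise ValueError("Each response must be on a 1-6 scale")
    return mean(responses)

# One made-up participant:
participant = [4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5]
print(score_maas_like(participant))  # ~4.27
```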
-
Never listened to this band before, but once I heard this song, I could never forget about it.
-
I'm literally only basing what I said on a PowerPoint slide that my professor used when talking about the extent of the replication crisis, and it mentioned among other things that "brain training techniques" are sort of in the clear (not N-Back training specifically), so don't read too much into that. I still feel that it works (very scientific, hehe). I thought about looking into it more (and I probably should, given my concerns). For the record though, it seems like all sciences that touch on human behavior in some form or another are implicated in the replication crisis, not just psychology: sociology, economics, biology, medicine, etc. But again, they're not all equally implicated, even within psychology. For example, some estimates say that cognitive psychology has a replication rate of 50% while social psychology only has 25%. And then you should also expect some findings to be more robust than others, especially the fundamentals of any given field (e.g. the fundamental workings of memory and perception in cognitive psychology: working memory, attention, etc.). It's usually when the hypotheses get more specific and novel that you run into problems with replicability and generalizability.
-
You don't remember anything though?
-
I've always contemplated quitting, but not so soon. Regardless, while my degree is technically a psychology degree, it also specializes in neuroscience, and there are many potential avenues there. I think my peers who're specializing in social and cognitive psychology are more worried than us neuroscience guys.

There are. It's just that some of the critiques seem to hit on something deep that might be incredibly hard to solve. At the same time, some parts of the field are less affected by things like the replication crisis than others (for example, the literature on the benefits of brain training techniques, which I'll go do now actually; I'm currently doing Triple 3-Back).
-
Which is?
-
My dude, AI can't even tie its own shoelaces. And when it can, it will probably throw psychology in the garbage.

That's a brilliant way of putting it. It does feel exactly like that sometimes.

So yeah, on that note of revolutionizing science, like I was alluding to earlier: when it comes to finding better solutions within the quantitative approach, you have things like Registered Replication Reports (RRRs), which is when a large group of scientists makes a coordinated effort to produce direct replications of a selection of studies that seem important to the field. The author of the "generalization crisis" paper problematized this by saying that direct replications don't actually matter with respect to the conclusions if the original studies over-generalized their findings, so there needs to be a similar effort for so-called "conceptual replications" (where instead of keeping the research design fixed like in direct replications, you intentionally vary the research designs to see where the correct level of generalization lies). You also have efforts like the Psychological Science Accelerator (PSA), which works similarly to RRRs by more generally coordinating research around a select set of topics.

One of the critiques against psychology as a field is the lack of "cumulative research", i.e. research that gets worked on over time and gathers a solid empirical basis. The alternative is often that some research gets started and then forgotten when people find out it's bunk, which leads to a constant cycle of jumping from hypothesis to hypothesis in a way that severely fragments the field. It's uncertain to which degree it's possible to lessen the fragmentation of psychological research, because the human mind is in a very fundamental sense multi-faceted. This then feeds into the "paradigm debate" (about whether psychology should be considered a single paradigm or multiple paradigms), which Kuhn himself used to conclude that psychology was not a real science, because he thought a real science consists of a shared paradigm that creates a focus around cumulative research.

As for securing the quality of research in more applied settings (e.g. COVID-19), you have proposals like "Evidence Readiness Levels" (ERLs), inspired by NASA's "Technology Readiness Levels" (TRLs) for testing technology related to aeronautics and space.
-
It's funny you say that, because some critique from another paper I've read talked about how most psychology research, partially due to the relative death of behaviorism and the birth of cognitive psychology, has shifted away from actually observing behavior and over to trying to explain behavior using some theoretical mechanism and measuring that mechanism with some indirect measurement like a questionnaire. So the question then becomes: is psychology still a study of behavior, or is it a study of speculations about the mechanisms behind behavior and indirect measurements of said mechanisms?

Mmm. The paper that talks about the "generalization crisis" says that one possible course of action for psychology is to move away from quantitative approaches and embrace qualitative approaches, i.e. approaches that don't rely on inferential statistics for their conclusions. So the field limits itself to descriptive statistics (e.g. "the study looked at 20 women in a school setting"), makes interesting observations and verbal conclusions (e.g. "these people think x, these other people think y"), and avoids conclusions like "the p-value is less than .05, therefore there is a statistically significant correlation between the two variables, and therefore our hypothesis has been corroborated". The problematic part about the latter, the part that causes the generalization crisis, is the inferential jump from "there is a significant correlation" to "our hypothesis has been corroborated", because the specific conditions that produced the statistics (the research design: the stimuli, experimenters, measurement instruments, etc.) are usually not mentioned in the hypothesis. Hence there is a tendency to generalize from one research finding to a larger conclusion when that is not actually warranted.

You might say that if we choose to only make highly specific conclusions, e.g. "there was a significant correlation in this setting, given these research parameters, etc.", we can avoid the generalization problem, but this puts the cart before the horse in the sense that we still use general hypotheses to drive our research and to focus in on what we think is relevant. If we continue building research questions around a general hypothesis but only report highly specific conclusions that never actually address the general hypothesis in any satisfying way, then we're never getting what we're after. We're instead fooling ourselves in a way, doing what Feynman called "cargo cult science": thinking we're doing important work because the cool-looking statistics seem to add up, while it never actually points to any important conclusion.

So what both of you said seems to point in the direction of embracing qualitative over quantitative research, i.e. merely inquiring into some behavior or asking somebody about their experiences without relying on any hard statistics for your conclusions, a bit like philosophy. That said, there are attempts at improving the quantitative approach which may help save it somewhat, but I'll maybe save going into that for later.
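To illustrate that inferential jump with a toy simulation (entirely made-up numbers): a hypothesis can look "corroborated" under one specific research design while a different, equally reasonable design shows nothing, which is exactly the gap between "significant here" and "true in general":

```python
# Toy simulation of the generalization problem (made-up numbers):
# the same general hypothesis tested under two different research designs.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 100  # participants per group

# Design A: a particular stimulus set / setting where the effect is built in.
control_a = rng.normal(0.0, 1.0, n)
treatment_a = rng.normal(1.0, 1.0, n)

# Design B: different stimuli / setting where the effect is absent.
control_b = rng.normal(0.0, 1.0, n)
treatment_b = rng.normal(0.0, 1.0, n)

for label, treat, ctrl in [("Design A", treatment_a, control_a),
                           ("Design B", treatment_b, control_b)]:
    stat, p = ttest_ind(treat, ctrl)
    print(f"{label}: p = {p:.3f}")

# A p < .05 under Design A only licenses a conclusion about Design A's
# specific conditions; treating it as support for the general hypothesis
# is the unwarranted jump described above.
```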
-
That is the problem, isn't it? Does it only sound like we're doing science or are we actually doing science?
-
Carl-Richard replied to davecraw's topic in Spirituality, Consciousness, Awakening, Mysticism, Meditation, God
Have you heard about the concept of top-down and bottom-up processing? It shows how your experience of the world is neither fully explained by discrete sensory inputs nor by a fully dissociated imagination. So your question might involve a false dichotomy (the example I brought up is not the only way to make that case).
-
Meta-meta-systematic reviews of double-blind placebo-controlled clinical trials, large-scale registered replication reports — only the highest level of scientific evidence on the matter.