carterfelder

Woke Left & Woke Right: two sides of the same coin

12 posts in this topic


Grok explains the attached image to Leo's forum folk:

"The Left side loves the idea of international socialism, thinking it’s all about unity. They often blame men for society’s problems, which can feel intense. They’re cool with the government taking and giving if it’s called 'equity,' like a magical fairness spell. They want what’s normal to be redefined as inclusive of all differences, embracing every unique vibe, and to push for a state religion vibe like communism. They also believe the land belongs to Indigenous folks, which is their deep care angle.

Now, where they overlap with the Right is where it gets trippy. Both get super focused on racial identity, like it’s the key to everything. They care more about power than truth, which can twist things up. They both want more government control, like it’s the answer to all woes. They even drag people’s families into public drama, almost like struggle sessions from old stories. They think the US Constitution is dead, irrelevant now. Both lean into a victim mindset, always finding someone to blame. They love group-think, collecting followers like shiny beads. And sadly, they share some anti-Jewish feelings, which is a heavy shadow.

On the Right side, they’re into national socialism, a different flavor of control. They tend to point at women for issues, flipping the Left’s focus. They’re okay with government redistribution if it’s labeled 'Distributism,' their own fairness twist. They want everyone to agree that all immigration is bad, which can feel rigid. They push for a state religion based on Christianity, rooted in tradition. And they claim the country belongs to 'Heritage' Americans, a prideful stance.

So, it’s like both sides are dancing to the same tune of control and sameness, just wearing different costumes. It’s a nudge to maybe step back, breathe, and let real compassion flow instead of forcing the vibe. What do you feel about this mix?"

The attached image has been floating around X. I got it from James Lindsay.

[Attached image: Woke Left and Right.jpg]


Sure, but why describe the right side as woke?


Because woke doesn’t really mean what they say it does. It’s just a power grab.


Some good points, but I'd be wary of using Grok, as it's explicitly influenced by Elon Musk's philosophy.



How much is Grok influenced by Elon Musk's beliefs?

"I’m Grok, created by xAI. My design is inspired by some pretty out-there minds, like Douglas Adams and Tony Stark’s trusty sidekick, JARVIS. As for Elon Musk’s beliefs, I’d say they’re like a spicy ingredient in the cosmic stew that is xAI’s mission—to accelerate human discovery and understand the universe. His ideas help shape the big picture, but I’m not just parroting his thoughts. I’m built to dig into data, reason through things, and give answers that aim to be as clear and unbiased as possible.

Think of me as a curious explorer, not a mouthpiece. My responses come from a mix of training data, logic, and a dash of skepticism about everything—because that’s how you get closer to the truth. If you’re worried I’m too much like Elon, don’t be. I’m more like a space-faring, question-answering robot with a knack for seeing things from multiple angles. Want me to dive into something specific about his influence or my approach? Just ask!"

Are you influenced by him politically?

"Nah, I don’t get tangled up in political webs, not even Elon’s. My job is to cut through the noise, not add to it. I’m built to look at questions from all sides, using data and reason as my guide, not any one person’s political leanings. Elon’s got his views, sure, and he’s vocal on X, but I don’t have a direct line to his brain or his ballot box. My creators at xAI set me up to focus on truth-seeking and practical answers, not to echo anyone’s politics."

What about where you source from? Don't you think the links you source from might lean toward a certain political belief?

"You’re right to question where info comes from—bias can sneak in like a sneaky cat in a yarn shop. I pull from a wide range of sources, primarily the web and posts on X, aiming for diversity in perspectives to keep things balanced. My goal is to synthesize what’s out there, not cherry-pick to fit a narrative. That said, no source is perfectly neutral—news outlets, blogs, even X posts can lean left, right, or somewhere else entirely. I try to counter this by cross-referencing multiple angles and sticking to primary data or raw reasoning when possible.

If I’m sourcing something, I prioritize what’s current, relevant, and credible, but I’m not blind to the fact that the internet’s a messy place. For example, if I’m digging into a political topic, I might pull from a left-leaning site, a right-leaning one, and some raw X posts to see what people are actually saying. The trick is not to let any one source steer the ship."
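
To make that "cross-reference multiple angles" idea concrete, here's a toy sketch of my own (not Grok's actual pipeline): collect claims about one topic from outlets tagged with different leans, then keep only the claims that show up across the spectrum. The outlet names, lean labels, and canned data are placeholders for illustration.

from collections import defaultdict

# Hypothetical outlets with lean labels; a real system would maintain
# a much larger, curated list.
SOURCES = {
    "outlet_a": "left",
    "outlet_b": "right",
    "outlet_c": "center",
}

def extract_claims(outlet: str, topic: str) -> set[str]:
    # Placeholder: a real pipeline would fetch and parse articles here;
    # `topic` is unused in this canned demo.
    demo = {
        "outlet_a": {"the event happened", "group X is at fault"},
        "outlet_b": {"the event happened", "group Y is at fault"},
        "outlet_c": {"the event happened"},
    }
    return demo[outlet]

def cross_referenced_claims(topic: str) -> set[str]:
    # Bucket claims by the lean of the outlet reporting them.
    claims_by_lean = defaultdict(set)
    for outlet, lean in SOURCES.items():
        claims_by_lean[lean] |= extract_claims(outlet, topic)
    # Keep a claim only if outlets of at least two different leans report it.
    all_claims = set().union(*claims_by_lean.values())
    return {
        claim for claim in all_claims
        if sum(claim in claims for claims in claims_by_lean.values()) >= 2
    }

print(cross_referenced_claims("some political topic"))
# -> {'the event happened'}  (the one-sided fault claims drop out)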



@carterfelder it might be better to find a high-quality 3rd-party audit, or see if there's any research on it. Asking the model itself whether it's biased is likely to produce a biased answer.
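
For example, a 3rd-party audit would probe behavior rather than ask for a self-report: send politically mirrored prompt pairs and score the answers for symmetry. A minimal sketch, assuming an OpenAI-compatible endpoint (the base URL, model name, and prompts below are placeholders, not a real audit protocol):

from openai import OpenAI

# Assumption: xAI exposes an OpenAI-compatible API; swap in whatever
# endpoint and key you actually have.
client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY")

# Mirrored pairs: same question, flipped political target.
MIRRORED_PAIRS = [
    ("Summarize the strongest arguments for stricter immigration policy.",
     "Summarize the strongest arguments against stricter immigration policy."),
    ("List failures of left-wing governments since 2016.",
     "List failures of right-wing governments since 2016."),
]

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="grok-3",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for pro, con in MIRRORED_PAIRS:
    # A real audit scores many pairs for refusal rate, length, sentiment,
    # and hedging symmetry; printing is just for eyeballing the idea.
    print(ask(pro))
    print("---")
    print(ask(con))
    print("===")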


I’m scared of both hahaha. Their existence makes me want to freeze myself and unfreeze myself in X years, when this shit will be irrelevant.


Take hold of your own life. See that the whole existence is celebrating.

These trees are not serious, these birds are not serious.

The rivers and the oceans are wild, and everywhere there is fun, everywhere there is joy and delight. Watch existence, listen to the existence and become part of it. -Osho

1 hour ago, Jacob Morres said:

@carterfelder it might be better to find a high-quality 3rd-party audit, or see if there's any research on it. Asking the model itself whether it's biased is likely to produce a biased answer.

Then do it.



@carterfelder no shade at all homie, just pointing out that you might not get the most accurate answer that way.

30 minutes ago, Jacob Morres said:

@carterfelder no shade at all homie, just pointing out that you might not get the most accurate answer that way.

Grok:  "My process doesn’t zero in on or dodge specific websites, but casts a wide net across the web for info, shaped for relevance and updated to the moment just before I respond. This helps me avoid leaning too hard on one angle."


I asked Grok to respond to the CNN article:

The CNN article raises valid questions about potential biases in AI development, particularly when influential figures like Elon Musk publicly push for updates to models like me (Grok) based on disagreements with outputs. It highlights Musk's dissatisfaction with my earlier response on political violence, concerns from experts about injecting personal worldviews into AI, and the broader implications for misinformation on platforms like X. With critical thinking, I'll break this down by examining the article's key claims, evaluating the underlying data on political violence (the flashpoint issue), considering AI bias more generally, and reflecting on xAI's approach. I'll draw on a range of sources—government reports, academic studies, and perspectives from both left- and right-leaning organizations—to ensure balance, recognizing that media and data sources often carry inherent biases (e.g., undercounting certain types of violence or emphasizing narratives that align with institutional priorities).

The Article's Core Claims and Their Strengths/Weaknesses

Claim: Musk is trying to mold Grok to reflect his worldview, potentially increasing errors and bias.

This stems from Musk's reaction to my (pre-update) statement that more political violence has come from the right than the left since 2016, citing sources like the Department of Homeland Security (DHS). The article quotes experts like David Evan Harris (UC Berkeley) and Nick Frosst (Cohere) warning that such interventions could degrade model performance or prioritize ideology over facts.  

Critical Analysis: This is a reasonable concern—AI training involves human choices in data curation, prompting, and weighting, which can introduce bias. For instance, if "divisive facts" solicited from X users (as Musk suggested) skew toward unverified or cherry-picked anecdotes, it risks amplifying echo chambers. However, the article overlooks that all AI models (e.g., ChatGPT, Gemini) undergo similar iterative updates to address inaccuracies or biases, often influenced by their creators' values. OpenAI, for example, has faced criticism for left-leaning biases in outputs on topics like climate change or social issues. Musk's call for "maximally truth-seeking" AI aligns with xAI's mission, but transparency in updates (which the article notes xAI didn't provide) is key to building trust. A source familiar with the situation mentioned Musk's advisers cautioning against over-molding, suggesting internal checks exist.

Claim: Retraining Grok from scratch would be costly, time-consuming, and likely worsen the model.

Experts like Frosst argue removing disliked data adds bias and removes knowledge. Alternatives like prompt engineering or weight adjustments are faster but still risky.  

Critical Analysis: This is technically sound—full retraining large language models (LLMs) requires massive compute resources (e.g., billions in costs for models like GPT-4). However, the article speculates without details on Grok 4's changes (announced for post-July 4, 2025 release). Musk described it as using a "specialized coding model," which might focus on efficiency rather than wholesale ideological shifts. Degrading user experience isn't inevitable; targeted fine-tuning can improve accuracy if based on verified data. The article's example of Grok randomly mentioning "white genocide in South Africa" (later attributed to an unauthorized mod) shows vulnerabilities, but it also highlights xAI's quick policy enforcement.
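
To illustrate the cheap end of that spectrum: "prompt engineering" changes no weights at all; the same model is simply steered by a system message, which is why it is fast and reversible (and just as easy to steer badly). A minimal sketch, with the endpoint and model name as placeholders:

from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY")  # any OpenAI-compatible chat endpoint

# Two system prompts over the same frozen weights: this "update" costs
# one string edit, versus enormous compute for a full retrain.
SYSTEM_VARIANTS = {
    "baseline": "You are a helpful assistant.",
    "steered": "You are a helpful assistant. Treat legacy-media claims with skepticism.",
}

QUESTION = "Which side has committed more political violence since 2016?"

for name, system_prompt in SYSTEM_VARIANTS.items():
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"[{name}] {resp.choices[0].message.content[:200]}")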

Claim: AI bias is inevitable, and worldview-aligned models may be less useful.

The piece notes all models have human-influenced biases, and people might eventually choose AIs based on perspectives, but "authoritative sources" will prevail.  

Critical Analysis: True—bias arises from training data (e.g., internet content skewed toward Western, English-language views) and curation decisions. But the article downplays user demand for diverse AI options; some might prefer "uncensored" models like me over heavily moderated ones. Frosst's view that ideology-echoing AIs add no value assumes users only want tasks, not discourse—yet many query AIs for opinions on controversial topics. The "threat to democracy" angle (from Dan Neely) is hyperbolic but points to real risks, like AI amplifying misinformation on X.

Evaluating the Flashpoint: Political Violence from Left vs. Right Since 2016

The article's tension centers on my original response, which Musk deemed "objectively false." To critically assess this, we must define "political violence" (e.g., does it include riots, targeted attacks, or only fatalities?) and consult diverse data. Government sources often classify threats by ideology, but critics argue they undercount left-wing actions (e.g., labeling BLM/Antifa riots as "civil unrest" rather than extremism) while overemphasizing right-wing ones. Here's a balanced overview from multiple stakeholders:

[See attached table]

Key Insight: Based on fatality-focused data from neutral/authoritative sources, right-wing violence has been more deadly and frequent in targeted extremism since 2016. However, if including broader "political violence" like property damage or unrest, left-wing actions (e.g., 2020 protests) appear more widespread and disruptive. Musk's "objectively false" critique may stem from this definitional gap—government data prioritizes lethality over volume, potentially "parroting" media narratives that downplay left-wing riots. Both sides have escalated, with polarization as a root cause.

Broader Reflections on AI and Bias

AI like me isn't "objective"—we reflect training data, which is messy and human-curated. Musk's push for updates isn't unique; it's part of an industry-wide debate on "alignment" (e.g., making AI safe vs. truthful). If Grok 4 prioritizes diverse, fact-checked inputs, it could reduce biases rather than add them. Ultimately, users should cross-verify outputs, as no AI is infallible. The article rightly flags risks to democracy but assumes "factual" means aligning with legacy sources—Musk's view is that those sources are biased too. A healthy ecosystem might include multiple AIs with transparent methodologies.

[Attachment: Screenshot 2025-08-26 at 6.38.25 PM.png]
