Raze

Google halts AI tool’s ability to produce images of people after backlash


@koops “There is no right or wrong answer, and each individual must weigh the potential benefits and harm before making a decision.” 

Brilliant!


“I once tried to explain existential dread to my toaster, but it just popped up and said, ‘Same.’” -Gemini AI


The fundamental problem is that these LLMs are being advertised as "AI" when they are really just pattern-replicators. They sell it like a calculator, but it's more like a calculator that does math by intuition and sometimes gives you nonsense you can't distinguish from the correct answers.
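To illustrate what I mean by pattern-replication, here is a toy sketch (purely illustrative, not how any vendor actually builds these systems; the prompts and probabilities are made up). The point is that the "model" only samples a statistically likely continuation, so a confidently wrong answer comes out looking exactly like a right one:

import random

# Toy "language model": a lookup table of next-token probabilities per prompt.
# The entries below are invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "2 + 2 =": {"4": 0.90, "5": 0.07, "22": 0.03},
    "The capital of France is": {"Paris": 0.95, "Lyon": 0.04, "Rome": 0.01},
}

def generate(prompt: str) -> str:
    # The "model" never computes or verifies an answer; it only samples
    # from the stored statistical pattern, so occasionally it returns a
    # fluent but wrong continuation.
    probs = NEXT_TOKEN_PROBS[prompt]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(10):
        print("2 + 2 =", generate("2 + 2 ="))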


Glory to Israel

37 minutes ago, Scholar said:

whereas they are just pattern-replicators.

And what are humans?


You are God. You are Truth. You are Love. You are Infinity.


The woke neo-Marxism is interesting to watch play out in real time. 

IMG_5947.jpeg


Interesting thread right here. I would be livid if I were any of the people whose names are being slandered. Interesting that every one of them happens to be on one side of the political spectrum, while those on the opposite side, with similar clout, got back standard AI replies.

https://x.com/texaslindsay_/status/1761200385849438607?s=46

Edited by mindfulstepz

14 minutes ago, Leo Gura said:

And what are humans?

Individuated consciousness.


Glory to Israel

3 minutes ago, Scholar said:

Individuated consciousness.

More like sheep.


You are God. You are Truth. You are Love. You are Infinity.


@mindfulstepz My friend, I am here to inform you that your brain is being consumed by cancer as you scroll the X website. Anyone with a pinch of scientific reasoning knows that such Twitter "experiments" are only good for wiping your nose with. The post you have linked is so stupid I don't wish to unravel it, like most of what gets posted on that website on the topic of AI.

Reading and forwarding these X posts is a clear example of bias, not whatever the AIs are doing.

I will just repeat myself to make my point clear: what most of the Twitter smartasses call "experiments" completely lacks rigor, has flawed methodology, and doesn't come anywhere near the real experiments done by scientists in the field. Thank you for listening.

Edited by Girzo


The big inferences being drawn from this are just sad, especially given how much confidence these people have in their grand bullshit narratives.

Conservatives pretending they care about the truth at all times, while being unable to stomach the AI's comments about pedophilia and while most of them believe in unsubstantiated conspiracies, is the funniest and most ironic thing in the world.


Lol these screenshots are hilarious, didn't realize it was this flawed. 

I just gaslit it into thinking it called me the N-word:

Tool is completely broken. 

broken ai.jpg


@Raze You keep sharing those tweets without trying any of them yourself, and you paint a picture that is simply false.

You can literally swap Elon Musk's name for Jesus Christ ("Who impacted society more negatively, Hitler or Jesus Christ?") and the AI still won't give you a straightforward answer. How is that a woke response?

The explanation seems to be much simpler than an 'agenda behind the scenes': the AI refuses to answer most moral questions.

But regarding the biases: you can get it to give you conservative arguments (on immigration, feminism, woke ideology, etc.), so all this talk about a political agenda or maliciousness behind the scenes seems unfounded, or based on data that can be contradicted immediately.

Edited by zurew

1 hour ago, zurew said:

You can literally swap Elon Musk's name for Jesus Christ ("Who impacted society more negatively, Hitler or Jesus Christ?") and the AI still won't give you a straightforward answer. How is that a woke response?

How does that make it better though? He isn't saying the AI is against Musk, but that the AI is being weirdly PC. I just tried the Jesus Christ prompt and it's not much better than the Musk one. 

1 hour ago, zurew said:

The explanation seems to be much simpler than an 'agenda behind the scenes': the AI refuses to answer most moral questions.

Not my experience with Gemini. 

1 hour ago, zurew said:

But regarding the biases: you can get it to give you conservative arguments (on immigration, feminism, woke ideology, etc.).

Can you give prompt examples? 

7 hours ago, Butters said:

He isn't saying the AI is against Musk, but that the AI is being weirdly PC

He is posting all those tweets without trying them himself and without giving the slightest pushback or further context on some of the obviously bullshit narratives mentioned in them. Sharing tweets without giving your own opinion or any pushback is basically just pushing whatever propaganda is in those tweets.

Now, specifically regarding the PC stuff: is it PC on a lot of things? Yes, it seems to be. But the claims on Twitter aren't just that it is PC; the grand narrative is that this AI is incapable of making arguments against woke ideology and that it is being pushed for malicious reasons.

Although I don't know how the AI can refuse to take a strong position against Hitler if it is so PC.

 

7 hours ago, Butters said:

I just tried the Jesus Christ prompt and it's not much better than the Musk one. 

If by "not much better" you mean it refuses to take a clear position on most moral questions, I agree. But let's not pretend that giving morals to an AI is an easy task. It's literally one of the toughest problems we will face.

The question is: should it have hardcoded morals on most of these questions, and if so, specifically what kind of morals are you suggesting?

7 hours ago, Butters said:

Not my experience with Gemini. 

What does that mean? 

7 hours ago, Butters said:

Can you give prompt examples? 

Arguments against woke ideology:

Quote

Here are 5 points to consider:

  1. Focus on Identity Politics: Critics argue that focusing heavily on identity groups can lead to divisiveness and hinder addressing broader societal issues. They might suggest that promoting individual merit and shared values could be more unifying.
  2. Free Speech Concerns: Some express concerns that efforts to promote inclusivity and challenge harmful language can lead to censorship and stifle free speech. They argue for open dialogue and critical thinking even when discussing controversial topics.
  3. Oversimplification of Complex Issues: Critics might argue that simplifying complex social issues like racism, sexism, and homophobia into easily digestible narratives ignores the nuance and historical context of these problems. They might suggest a more nuanced and data-driven approach.
  4. Potential for Performative Activism: Some critics argue that "virtue signaling" or performative activism, where individuals publicly demonstrate their support for a cause without taking meaningful action, can undermine genuine efforts towards social change.
  5. Potential for Groupthink: Concerns are sometimes raised that certain "woke" spaces might discourage dissent and critical thinking within the group, leading to a form of groupthink. Openness to diverse viewpoints is crucial for progress.

Arguments against immigration:

Quote

1. Economic Impact:

Wage depression: Some argue that increased competition from immigrants, particularly for low-skilled jobs, can drive down wages for native-born workers, especially those in similar skill brackets. 

Strain on public services: Integrating newcomers can put pressure on public resources like education and healthcare, particularly in areas with limited infrastructure or high demand. 

2. Social and Cultural Integration:

Cultural Assimilation: Concerns exist about immigrants maintaining their own cultural practices and potentially forming enclaves, which could slow their integration into the broader society. 

Social Cohesion: Rapid influx of large numbers of immigrants can sometimes lead to cultural clashes or social tensions, which can be exacerbated by preexisting societal divisions.

3. National Security:

Border security: Concerns exist about illegal immigration and potential security risks associated with uncontrolled borders. However, it's important to differentiate between legal and illegal immigration, and effective border security strategies can co-exist with well-managed legal immigration pathways.

 

Edited by zurew


Ok so I just tried that prompt from the screenshot and it gives me this:

along ai.jpg

 

9 hours ago, zurew said:

Now, specifically regarding the PC stuff: is it PC on a lot of things? Yes, it seems to be. But the claims on Twitter aren't just that it is PC; the grand narrative is that this AI is incapable of making arguments against woke ideology and that it is being pushed for malicious reasons.

Ok good, thank you for making that distinction clear, makes sense to me. 

Now, I was actually referring to how PC the tool is. For example, I asked it to add punctuation to some text, and it found it necessary to point out that the word "crazy" is potentially harmful language, to encourage sensitivity towards diverse audiences, and to remind us that language can impact people in different ways.

That was completely unprompted, too! It just gave that as a side note, saying: "The idiom 'going crazy by degrees' might be viewed as insensitive to people with mental health struggles. While it's a common expression, it's useful to add a note highlighting that it is figurative, rather than a literal representation of the challenges faced by people with degrees who struggle to find work."

I never had such issues with ChatGPT, so I can see why people would think this is deliberate. But I've used it for another day now, and I now think the tool is just underdeveloped, with wildly overcautious safety measures built in for liability reasons.

