Sincerity

The Future of Moderation is in AI

26 posts in this topic

I think moderation of social media platforms (and, e.g., a forum) is something AI could easily cover. A level of human oversight would still be needed, of course. But roughly 80% of moderation work could be handed over to AI, and it might even do the job better.

For those familiar with agentic AI: an AI agent "Social Media Moderator" could be created with detailed instructions and attached guideline documents on what to moderate and to what degree. A trigger event for this agent could be a post report, for example.
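As a minimal sketch of what such an agent's configuration could look like (all names and fields here are hypothetical, not any particular framework's API):

```python
from dataclasses import dataclass, field

@dataclass
class ModeratorAgentConfig:
    """Hypothetical config for a 'Social Media Moderator' agent."""
    instructions: str                                         # system prompt describing the role
    guideline_docs: list[str] = field(default_factory=list)   # attached guideline documents
    trigger_events: list[str] = field(default_factory=list)   # events that invoke the agent

# Example: an agent triggered whenever a post is reported.
config = ModeratorAgentConfig(
    instructions=("You moderate forum posts against the attached guidelines. "
                  "Recommend an action and explain your reasoning step by step."),
    guideline_docs=["forum_guidelines.md", "warning_policy.md"],
    trigger_events=["post_reported"],
)
```

The point is simply that instructions, guidelines, and triggers are separate pieces that can each be adjusted independently.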

How it could work on this forum: when someone reports another user's post, that post is run through the AI agent that determines whether there is any action needed. If there is, then the AI agent can perform the appropriate actions - hide the post, shoot the user a message, warn the user, perhaps give them a posting restriction. This is all doable with the right tools and software. But then there would still be a level of human oversight that could override AI decisions.
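The report-to-action flow above could be sketched roughly like this (the action names and post shape are illustrative assumptions, and every action lands in an audit log a human moderator can review and override):

```python
from enum import Enum

class ModAction(Enum):
    NONE = "none"
    HIDE_POST = "hide_post"
    MESSAGE_USER = "message_user"
    WARN_USER = "warn_user"
    RESTRICT_POSTING = "restrict_posting"

def apply_decision(post: dict, decision: ModAction) -> list[str]:
    """Apply the agent's decision; return an audit log for human review."""
    audit = []
    if decision is ModAction.HIDE_POST:
        post["hidden"] = True
        audit.append(f"hid post {post['id']}")
    elif decision is ModAction.MESSAGE_USER:
        audit.append(f"messaged user {post['author']}")
    elif decision is ModAction.WARN_USER:
        audit.append(f"warned user {post['author']}")
    elif decision is ModAction.RESTRICT_POSTING:
        audit.append(f"restricted posting for {post['author']}")
    return audit  # an empty list means no action was needed

post = {"id": 42, "author": "alice", "hidden": False}
log = apply_decision(post, ModAction.HIDE_POST)
```

The human-oversight layer would then be a queue of these audit entries rather than a queue of raw reports.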

This would all require iterative feedback, of course. The agent would have to be adjusted over time to find the golden middle and moderate the space in a desirable way. There are also technicalities that would need tuning: since agent calls could be abused and are costly, there would have to be rules like "posts older than 1 week can't be reported" or "if a user reports several posts in a row that weren't guideline-breaking, that user can't report posts for a given time".
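Guard rules like those are cheap to check before ever invoking the (costly) agent. A sketch, using the limits as stated above (the streak threshold of 3 is an assumed example):

```python
from datetime import datetime, timedelta

MAX_REPORT_AGE = timedelta(weeks=1)   # "posts older than 1 week can't be reported"
UNFOUNDED_LIMIT = 3                   # assumed: unfounded reports in a row before a cooldown

def may_report(post_created: datetime, now: datetime, unfounded_streak: int) -> bool:
    """Cheap pre-checks that run before any agent call is made."""
    if now - post_created > MAX_REPORT_AGE:
        return False   # post is too old to report
    if unfounded_streak >= UNFOUNDED_LIMIT:
        return False   # reporter is temporarily blocked from reporting
    return True
```

Only reports that pass these checks would ever reach the agent, which caps both abuse and API cost.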

I'm just presenting this as an idea: just another thing that can reasonably be delegated to AI. 9_9 LMK your thoughts, I think this could make for an interesting discussion.

Words can't describe You.


I think it would be great. Users will be more confident in the decisions made and there would be less of a perception of bias. Less room for human error.

I can envision cases where the AI may have to defer and escalate to a human to oversee. 

I can also imagine users claiming AI had a programming bias 🤣

 


It is far easier to fool someone, than to convince them they have been fooled.


This post should be featured, no question. It’s ironic how AI might end up being more neutral than the people trying to be.

 

 

 

1 minute ago, Natasha Tori Maru said:

I think it would be great. Users will be more confident in the decisions made and there would be less of a perception of bias. Less room for human error.

Exactly!

2 minutes ago, Natasha Tori Maru said:

I can also imagine users claiming AI had a programming bias 🤣

That too. 🤣

On a consciously led social media platform, the AI agent's full instructions could be made transparent, so that everyone knows what they're dealing with.


Words can't describe You.

4 minutes ago, Natasha Tori Maru said:

I can also imagine users claiming AI had a programming bias 🤣

 

People never take a break, do they? 😅

 

1 minute ago, Monster Energy said:

People never take a break, do they? 😅

 

When it comes to people - there will ALWAYS be something to gripe about 😂



You have to be careful: when the immune system of a system is too "perfect", it can lead to a cessation of progress.

There are two important components here: Stability and degrees of freedom. 

When collectives evolve, it is basically a balance between these two things. We can use the law as an example:

If we had a way to enforce the law perfectly, reducing all crime so completely that every crime is detected and stopped at its very inception, you would get a society that might become frozen in that state of development. The reason is that any law, at any given time, is a reflection of the state of development of society at that particular point.

If you had a law against drugs and mind-altering substances, nobody in that society would ever get to experience psychedelics if law enforcement were perfect. It is easy to see when laws are clearly wrong from our point of view, but the issue is that a society never knows exactly what its blind spots are.

 

The danger of an AI-based moderation system is therefore that it is overly good at achieving its goals. For example, you could have an AI system that does not merely work by keywords, but genuinely analyses the content of any post, derives its meaning, and then decides whether that content should be banned, hidden from the algorithm, etc., on the basis of whatever set of rules the platform established.

If you do this, you might make it impossible for marginalized forms of speech, which might be considered controversial and dangerous today, to ever find any exposure in this new world.

Here are various examples of speech that might be completely erased by such a system today:

- Speech about drugs/psychedelics considered to be dangerous, and positive advocacy for such drugs.

- Speech that relates to controversial moral discussions

- Speech that relates to sexual minorities demonized by a society

- Speech that relates to spiritual truths too radical/threatening for people

- Speech that has negative economic impacts on the platform where it is published

 

In a human collective, you basically want to leave some wiggle room for criminality so that forms of criminality that are actually justified can take place. Perfect enforcement would undermine the dynamics that are necessary for social evolution. 

I think it is easy to underestimate just how important this wiggle room is. Giving humans ultimate power is so dangerous because human beings are ignorant and immature, which will be reflected in how they will wield this power.

11 minutes ago, Natasha Tori Maru said:

I can envision cases where the AI may have to defer and escalate to a human to oversee. 

Actually, an AI agent could have a surprising level of complexity, comparable to (if not better than) that of a human.

When a post is reported, the AI could also take into consideration everything relevant that was said earlier in the thread. It could also take into consideration how many warnings the user already has, et cetera. And when taking its moderation actions, the agent would lay out its full reasoning process step by step, so that the context is clear to the moderator overseeing the AI.

These AI agents are not stupid anymore. They could handle really complex cases.
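A rough sketch of how that fuller context could be assembled for the agent (the field names and prompt wording are hypothetical):

```python
def build_moderation_context(reported_post: dict,
                             thread_posts: list[dict],
                             warning_count: int) -> str:
    """Assemble what the agent sees before deciding: the thread so far,
    the reported post, and the user's warning history. The prompt also
    asks for a step-by-step reasoning trace for the overseeing human."""
    thread = "\n".join(f"{p['author']}: {p['text']}" for p in thread_posts)
    return (
        f"Thread so far:\n{thread}\n\n"
        f"Reported post by {reported_post['author']}: {reported_post['text']}\n"
        f"Prior warnings for this user: {warning_count}\n"
        "Lay out your reasoning step by step, then state the action."
    )
```

Because the reasoning trace is part of the agent's output, the human overseer audits a written argument rather than a bare verdict.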



@Scholar I think this is why full transparency of the agent's instructions would be important. But that'd only happen on a consciously led platform.

Of course we wouldn't want to marginalize views. In my head the AI would only be enforcing guidelines that human moderators are already tasked to enforce.

Again, it'd have to be an iterative feedback process and a golden middle would need to be found.

6 minutes ago, Sincerity said:

@Scholar I think this is why full transparency of the agent's instructions would be important. But that'd only happen on a consciously led platform.

Of course we wouldn't want to marginalize views. In my head the AI would only be enforcing guidelines that human moderators are already tasked to enforce.

As I said, it'd have to be an iterative feedback process. And a golden middle would need to be found. Of course no one wants a platform where you can't express anything even slightly controversial.

This doesn't matter; society would agree with the agent, and that's the whole danger. Society is ignorant. If society could have banned transsexuality or psychedelics from becoming popular topics, it would of course have done so.

 

The issue is that we WOULD want to marginalize views; that's exactly what people want. We already marginalize certain views with algorithms and human moderation.

Again, just analogize it to law enforcement: if we had perfect enforcement of drug laws, nobody here would be talking about psychedelics. You are basically asking for superhuman cops that can catch every crime at its inception.

The good thing about humans is that they are flawed: they allow enough wiggle room for marginalized perspectives to eventually find expression in society. With superhuman AI, or just cheap human-like AI at mass scale, you can forget about that in the future.

23 minutes ago, Scholar said:

This doesn't matter; society would agree with the agent, and that's the whole danger. Society is ignorant. If society could have banned transsexuality or psychedelics from becoming popular topics, it would of course have done so.

The issue is that we WOULD want to marginalize views; that's exactly what people want. We already marginalize certain views with algorithms and human moderation.

Well, but transsexuality is not banned on current social media platforms, for example. Why would the AI change that? Your argument is based on the human will to marginalize. But that will is already at work even with algorithms and human moderation, no?

Also, I think the free market could do its job if a given social media platform became VERY oppressive with its AI moderation: a less oppressive platform would arise, and people would potentially flee to it. But maybe not; I won't die on this particular hill.


@Natasha Tori Maru You have to program a moderator AI to have a bias. That's the purpose of moderation. You can't be unbiased and be a moderator.


The other side of the coin:

1. AI moderation would make this place much colder and more lifeless. The role of a moderator is not just to be pragmatic, but also to show leadership, to be a role model, and to support others (emotionally) with whatever capacity they have. The human element cannot, by definition, be replaced by AI.

2. Being a moderator is a huge chance at growth, right fellow mods? ^_^

Connect with me on Instagram: instagram.com/miguetran


All Mods will be replaced with my swarm of AI girlfriends.

:P


You are God. You are Truth. You are Love. You are Infinity.

1 hour ago, Sincerity said:

Well, but transsexuality is not banned on current social media platforms, for example. Why would the AI change that? Your argument is based on the human will to marginalize. But that will is already at work even with algorithms and human moderation, no?

Also, I think the free market could do its job if a given social media platform became VERY oppressive with its AI moderation: a less oppressive platform would arise, and people would potentially flee to it. But maybe not; I won't die on this particular hill.

The problem is the perfect enforcement of human will. Transsexuality is an example that would have applied 20 years ago; the things that are marginalized today are obviously outside the consciousness of the mainstream.

Humans did try to suppress LGBTQ speech in the past; the whole point is that they failed because human beings are imperfect. A system of AIs like the one you propose would have oversight over every single piece of content that is posted.

 

People only care about transsexuals today because these marginalized perspectives were pushed forward despite cultural censorship and backlash, precisely because there was no perfect speech moderation back then.

People wouldn't flee to another platform; people would actively want the platform to police that type of speech. Twenty years ago, a majority would have limited trans-rights speech if they had had their way. One problem is that, of course, society could become more conservative as time goes on, or various incentives could push platforms to suppress certain marginalized speech that most people simply don't care about.

 

If you want an example that is relevant today, in parallel to trans rights, take consanguinamory rights. Most people think advocating for such things is dangerous, obscene, perverse, and abuse-enabling. Various social media platforms will limit visibility if you so much as mention a keyword related to the topic. This makes rights advocacy extremely difficult in the environment we already exist in; an AI tasked with "preventing dangerous speech" could easily make it impossible to ever talk about the topic in an organized manner that has any impact on society. And the vast majority of people, including progressives, wouldn't mind at all, because they share the perspective that enabling such things is dangerous and wrong. Nobody will leave a platform because "incest rights" discussions are banned from it; to them it is no different from banning speech that advocates rape.

But this could easily spill over into other illegal things, like drug usage. There would be no clever way to evade a system whose AI was genuinely intelligent and could read subtext. Drug usage is a less controversial topic today, so most people would be bothered if it were targeted, but that just shows how relative everything is to the time we live in.

There are various other, similarly controversial topics that are already increasingly difficult to discuss even with human moderators around.

 

In the coming years, we will see increasing dangers from unfiltered free speech, which will make people more open to the idea of censoring and regulating some types of speech. If the regulatory mechanisms are too efficient once that happens, the result might very well resemble the "no crime world", which we would all agree would be terrifying.

You think you want perfect law enforcement, but you actually do not. A world in which every law was perfectly enforced would be a terrifying world, and the same applies to speech.

5 minutes ago, Elliott said:

It would ban leo

It would ban the whole website.

 

6 minutes ago, Elliott said:

It would ban leo

I laughed way harder at this than I should have.


