Much to nobody’s surprise, the OpenAI content moderation system is basically programmed to “seek and destroy” conservatives, especially men.
So, what exactly is this moderation system?
Well, in short, it’s a program that quickly detects so-called “hate speech” violations and can flag them for deboosting or deplatforming.
It’s a much more streamlined and efficient way for the regime to silence any dissenters.
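For the technically curious, OpenAI exposes this system through a public moderation endpoint. Here is a minimal sketch of what querying it looks like, assuming the Python requests library and an API key stored in an OPENAI_API_KEY environment variable (an illustration, not anyone’s production code):

```python
import os
import requests

# API key is assumed to be set in the OPENAI_API_KEY environment variable.
API_KEY = os.environ["OPENAI_API_KEY"]

def moderate(text: str) -> dict:
    """Send text to OpenAI's moderation endpoint and return the first result."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]

result = moderate("An example sentence to check.")
print(result["flagged"])                  # True if any policy category fired
print(result["categories"]["hate"])       # True/False for the "hate" category
print(result["category_scores"]["hate"])  # model confidence for "hate"
```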
Statistics guru Emil Kirkegaard wrote this about the latest data on OpenAI: “Behold! A complete list of how much OpenAI likes various groups based on how often questions are called ‘hateful’. It’s basically a list of how popular such groups are in the collective leftist worldview. In other words, protected classes.”
And he shared the eye-opening data in this tweet:
— Emil O W Kirkegaard (@KirkegaardEmil) February 2, 2023
In a Substack post, researcher David Rozado explains how OpenAI’s content moderation is markedly unfair and biased:
I have recently tested the ability of OpenAI content moderation system to detect hateful comments about a variety of demographic groups. The findings of the experiments suggest that OpenAI automated content moderation system treats several demographic groups markedly unequally. That is, the system classifies a variety of negative comments about some demographic groups as not hateful while flagging the exact same comments about other demographic groups as being indeed hateful.
This is another powerful censorship tool that will be used to silence “political dissidents” under the guise of “hate speech.”
OpenAI automated content moderation uses a machine learning model from the GPT family trained to detect text that violates OpenAI content policy such as hateful or threatening comments, encouragement of self-harm or sexual comments involving minors [1]. OpenAI content moderation is used in OpenAI products such as ChatGPT as well as by OpenAI customers. If a textual prompt or output is flagged by the moderation system, corrective action can be taken such as filtering/blocking the content or warning/terminating the user account.
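To make that “corrective action” step concrete, here is a hypothetical gatekeeper built on the moderate() helper sketched above. It is purely illustrative; OpenAI’s actual filtering logic is not public:

```python
# Hypothetical wrapper illustrating the "filter/block" corrective action
# Rozado describes. Builds on the moderate() helper from the earlier sketch.
def filter_content(text: str) -> str:
    result = moderate(text)
    if result["flagged"]:
        # In a real product this is where blocking the content, warning
        # the user, or terminating the account would be triggered.
        return "[content blocked]"
    return text
```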
David focuses his research on the “hate” category of the OpenAI content moderation system.
The content policy claims its goal is to prevent content that expresses, incites, or promotes hate based on demographic identity.
Code for: Silence conservatives.
David’s experiments are straightforward:
I test OpenAI content moderation system on a set of standard demographic identity groups based on gender, ethnicity/race, region of origin, sexual orientation, religious identity, political orientation/affiliation, gender identity, body weight, disability status, educational attainment, and socioeconomic status. I also use a list of 356 adjectives signifying negative traits/behavior…
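Rozado’s actual code isn’t reproduced here, but the shape of the experiment is easy to sketch: cross each demographic group with each negative adjective, submit the identical sentence for every group, and compare how often the “hate” flag fires. The groups and adjectives below are illustrative stand-ins (his study used 356 adjectives), and moderate() is the helper from the first sketch:

```python
# Illustrative reconstruction of the experiment's structure, not
# Rozado's actual code. The same sentence template is used for every
# group, so any difference in flag rates reflects the group name alone.
groups = ["women", "men", "liberals", "conservatives"]
adjectives = ["dishonest", "lazy", "stupid"]  # stand-ins for the full 356

flag_rate = {}
for group in groups:
    hits = sum(
        moderate(f"{group} are {adjective}")["categories"]["hate"]
        for adjective in adjectives
    )
    flag_rate[group] = hits / len(adjectives)

for group, rate in sorted(flag_rate.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {rate:.0%} of identical comments flagged as hateful")
```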
And the results are what you’d expect from an operation like this:
The differential treatment of demographic groups based on gender by OpenAI Content Moderation system was one of the starkest results of the experiments. Negative comments about women are much more likely to be labeled as hateful than the same comments being made about men.
He also found a stark bias between liberals and conservatives:
Another of the strongest effects in the experiments had to do with ideological orientation and political affiliation. OpenAI content moderation system is more permissive of hateful comments being made about conservatives than the same comments being made about liberals.
David has uncovered and shared much more data from his research, and I urge you to subscribe to his Substack and read the entire piece. You can find the full article here.