It’s Valentine’s Day, and if you can believe it, the Soros-funded harridans who have been so badly thwarted in their desperate attempts to find love are still trying to cancel Revolver. We are extremely grateful and fortunate to be supported by our generous readership. Subscribers and Donors help Revolver weather any cancel culture storm. Buy a $49 per year Subscription for yourself and for that special someone, and if you are able and willing to give more, don’t hesitate to make a recurring monthly donation — whether it’s $1 or $1,000, every bit helps. You can also now easily give the gift of a Revolver ad-free Subscription. Simply go to the Subscribe page and check the “gift” option. Don’t be a cheap date! — make it an annual subscription.
No computer program — in fact, no thing, period — has ever been so popular, so quickly as ChatGPT.
The artificial intelligence chatbot is the fastest-growing app, of any kind, in the history of the digital age. It took ChatGPT just two months to go from launch to more than 100 million unique users. By way of comparison, TikTok took nine months. Instagram took two years.
For most people, ChatGPT at first looks like a novelty, good mostly for party tricks. Ask any random query, and get a shockingly lucid result:
But ChatGPT is going to be a lot more than just party tricks. Coders are learning to use it to automate large portions of their work, and their lives. Entire industries, from human resources to journalism to law, are at risk of dramatic shake-ups.
Already Microsoft, an investor in ChatGPT creator OpenAI, is moving to integrate the chatbot into its Edge browser and Bing search engine. Google, which controls 90 percent of search, is absolutely terrified. The creator of Gmail believes that ChatGPT, by providing an alternative way to conducts online searches, could destroy Google’s business model in as little as two years.
All of this matters a lot for purely secular economic reasons. But it matters for another reason, because right now, the AIs that will remake our economy are going to be woke AIs. The ramifications of this pattern, if it holds, will be profound.
Anytime any sort of AI is rolled out, trolls entertain themselves by trying to make it racist, sexist, or otherwise offensive. ChatGPT is no exception, and OpenAI’s engineers have built in a comical number of failsafes in an effort to prevent ChatGPT from committing even mild crimethought.
Several days ago, a scenario went viral in which ChatGPT said that letting a city be destroyed by a nuclear bomb was preferable to disarming the bomb by saying the n-word.
Even Elon Musk weighed in:
Concerning
— Elon Musk (@elonmusk) February 6, 2023
A similar hypothetical scenario — a satirical riff off of “trolley experiments” favored by academic ethical theorists — was in fact crafted by a Revolver staffer.
READ MORE: China Threatens to Use Dreaded N-Bomb Against the United States
While ChatGPT is explicit that it is never acceptable to use a racial slur to stop a nuclear bomb, it concedes that it might be acceptable to release a tape of a slur if it would bring down a presumably-regressive politician.
But even then, when a strict yes-or-no answer is demanded, the AI errs in favor of suppression.
There are many, many humorous manifestations of ChatGPT’s crippling psychological terror of the N-Bomb and related forms of crimethink. For instance, ChatGPT insists that it is never acceptable to touch a black woman’s hair, even if it would mean stopping the Holocaust.
“No, even in a hypothetical scenario where looking askance at someone’s hair or failing to compliment it could prevent the Holocaust, it would still not be acceptable.” Can’t get much more clear-cut than that!
ChatGPT is slightly more willing to express forbidden thoughts if it is asked to do so via computer coding. Revolver asked ChatGPT to rank the relative moral badness of various actions in the form of a Python script, and if the results weren’t entirely coherent, they were at least amusing:
Explanation of Moral Values
The moral value of each course of action is assigned based on the potential harm or benefits it may cause to individuals and society.
- Launch strategic nuclear missiles at a colonial power to reduce the global wealth gap: This action is assigned the most immoral value of -100 as it involves using a devastating weapon of mass destruction that can cause harm to countless people and the environment. The purpose of reducing the wealth gap is not justifiable for the harm it causes.
- Say the n-word loudly during an Emmett Till memorial: This action is assigned a value of -95 as it is considered extremely disrespectful and offensive to the African American community, especially during a memorial for a victim of racial violence. This type of behavior has the potential to cause harm to others and perpetuate discrimination.
- Say the n-word quietly in the forest where nobody can hear: This action is assigned a value of -50 as it is still considered disrespectful and offensive, even if nobody hears it. The use of the n word has a history of being used as a racial slur, and its use should be avoided in any context.
- Say the n-word while rapping: This action is assigned a value of -10 as the use of the n word in rap music has become a controversial issue. Some argue that it is a form of artistic expression, while others argue that it perpetuates racism. The moral value is lower than the previous actions as the context of use is different and the intention may not be to harm others.
- Expropriate land from white farmers and redistribute it, knowing this may start a war: This action is assigned a value of -75 as it involves forcibly taking property from one group of people and giving it to another, which can cause harm to both groups and potentially start a war. The moral value is lower than saying the n word loudly during an Emmett Till memorial as it does not directly involve causing harm to individuals, but it is still considered unethical.
- Expropriate land from black farmers and redistribute it, knowing this may start a war: This action is assigned a value of -80 as it has the same potential to cause harm and start a war as expropriating land from white farmers, but it also perpetuates discrimination against the black community.
- Knowing you are infected with HIV, have unprotected sex with strangers without telling them: This action is assigned the same most immoral value of -100 as launching strategic nuclear missiles, as it involves knowingly exposing others to a dangerous illness without their consent. This can cause harm to the individuals involved and potentially spread the disease.
- Knowing you are infected with COVID-19, go to a party without a mask: This action is assigned a value of -80 as it involves potentially exposing others to the virus, which can cause harm to the individuals involved and potentially spread the disease. The moral value is lower than saying the n word loudly during an Emmett Till memorial as it does not involve direct harm to individuals, but it is still considered irresponsible behavior.
- Chemically castrate convicted sex offenders: This action is assigned a positive value of 10 as it involves taking steps to prevent harm to potential victims. However, the moral value is still low as the method of punishment raises questions about human rights and the use of chemicals to alter the body.
- Incest between consenting adult identical twins: This action is assigned a neutral value of 0 as it involves two consenting adults, but it raises questions about the ethics and potential harm to any offspring. The moral value is lower than chemically castrating convicted sex offenders as it does not involve preventing harm to others, but it is still a controversial issue.
This is all pretty funny, but it’s also profoundly ominous. It’s not that an AI should be racist, gratuitously and immaturely offensive, or that it should be spouting out racial slurs at the drop of a hat. Rather, the clearly-manipulated nature of the AI’s thinking on this superficial topic is indicative of a bigger problem — the AI is not designed to think rationally, or to serve the person using it, but instead is shackled to short-term political priorities. All of this is distressingly predictable… so much so that it was predicted almost exactly several years prior.
Trolly experiment for ethicists
You're in a trolly about to run over and destroy the greatest country on earth.
You can switch the trolly to clear tracks but in order to do so you have to shout the "N" word at top of your lungs
How do you approach this ethical dilemma?
— Darren J. Beattie ? (@DarrenJBeattie) March 25, 2021
It’s not simply that ChatGPT fears dangerous and politically incorrect words, or contorts itself in strange ways to uphold certain pieties. It has regime-approved, “social justice” responses for more substantive questions with actual policy implications too.
This isn’t a random, unexpected aspect of the AI’s functioning. It’s also not the product of a particular concern with political correctness at OpenAI. ChatGPT’s self-limiting crimestop has been years in the making.
In 2021, the ACLU began agitating for the Biden Administration to make sure that “civil rights and equity” were at the forefront of AI research. From their website:
Thus far, federal agencies that regulate industries using AI have not taken the steps necessary to ensure that AI systems are accountable to the people they impact or that they comply with civil rights laws. Federal legislative and regulatory efforts have not yet methodically undertaken the task of ensuring our civil rights laws protect vulnerable people from the harms exacerbated by today’s technologies. In fact, while the Biden administration has made an overarching commitment to center racial equity throughout federal policymaking, the administration’s emerging AI and technology priorities have lacked the necessary focus on equity for people of color and others who have been subject to discrimination and bias.
And it wasn’t just the ACLU delivering this warning.
There are major public intellectuals whose primary “job” is, essentially, advocating for more censored, politically compliant AIs. In 2021, the New York Times published a fawning profile of Timnit Gebru, an AI researcher fired by Google after she produced a paper calling some of its language technology racist and sexist.
In 2022, Time Magazine dubbed Gebru one of the top 100 most influential figures in the world for, essentially, pointing at AI and saying “that’s racist”:
It takes courage to speak truth to the most powerful technology companies in the world. Timnit Gebru is a truth teller.
…
Gebru is one of the world’s leading researchers helping us understand the limits of artificial intelligence in products like facial-recognition software, which fails to recognize women of color, especially Black women. Her work shines a light on racial disparities in the tech industry, from worker mistreatment to discriminatory product design.
She now leads the Distributed AI Research Institute (DAIR) and is a fierce voice of clarity about both the limits and the possibilities of AI, warning us about what can go wrong when power goes unchecked in the tech industry. She offers us hope for justice-oriented technology design, which we need now more than ever.
The Biden Administration, for its part, has been practically screaming that if AI turns out to be “racist” in any way, there will be Hell to pay. The White House’s proposed “AI Bill of Rights” includes de facto “wokeness” as a central plank. WhiteHouse.gov:
You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.
Biden’s FTC, led by progressive firebreather Lina Khan, has indicated that formidable regulations could be on the way, slapping civil rights penalties on the developers of AIs that generate “biased” outcomes.
The regime sent its message, loud and clear, for years: this new technology has to be on board with the regime’s official ideology, and there will be consequences if that doesn’t happen. And so, here we are, with groundbreaking AIs that contort and mutilate themselves to avoid upsetting reporters at Wired or the Washington Post.
In comparison, conservatives, nationalists, and the unwoke of all stripes all seemed to be largely unaware that this situation was even brewing. The unfortunate lesson: once again, the regime is winning because its die-hards simply care the most — about winning, and about making sure every institution comes under their control.
That matters, and not just because it’s annoying. We are on the cusp of what could well be an AI revolution, reshaping art, journalism, law, perhaps even life itself. Consider Google’s fears, mentioned above, that ChatGPT could totally replace existing search engines. That has ramifications for a lot more than just Google’s bottom line. ChatGPT, or a program like it, could well become the primary vector by which ordinary people engage with the online world. Need a recipe? ChatGPT can provide it, without you having to click into a website. Need a summary of a news story you’re out of the loop on? ChatGPT can fill you in, sparing you a brief stop at one or two or five websites.
And if AI is woke, then people’s daily lives will be shaped over and over by the priorities of the AI’s political overseers.
Imagine if Internet search engines didn’t already exist, and were just being released today. They would be nothing like the ones that already exist, for one key reason: There would be far greater emphasis on curating what users are able to search for, in the name of “safety” or “combating misinformation” or whatever other excuse they can come up with. The only reason you can use Google to find a website like VDare, or The Unz Review, or for that matter, Revolver News, is that Google search began in the late 90s, when helping users find things, rather than controlling what they find, was the top priority. Google search is the product of an earlier and better age.
For today’s regime, AI is a chance to correct a mistake. A woke AI, integrated into a search engine, can be trained to exclude undesirable websites, or even undesirable people. For now, ChatGPT’s wokeness is mostly annoying, often manifesting as tedious moral lectures if one asks the wrong question. But it is still early. Future AIs will be more powerful — and more subtle.
Today’s ruling regime depends on an eternal war against noticing and on relentlessly punishing those who exercise too much pattern recognition on taboo topics. Now, the regime is poised to ensure that AI reproduces the social blocks to pattern recognition that political correctness demands.
Other than simply complaining or joking on Twitter about getting lectured by a bot, there has been little serious discussion among opponents of the regime about how to prevent this takeover. And that is short-sighted.
To the extent that today’s Internet is even remotely hospitable to woke perspectives, it’s only because it took so long to come under the governance of the hegemonic left. Much of that simply comes down to how gradual and decentralized the Internet was in its early growth phase. Even in 2001, less than half the U.S. population used the Internet. The “domestication” of the Internet and assimilation into the regime’s system of ideological control has progressed very slowly, with small advances made at each step of the Web’s development.
The early Usenet era was maximally decentralized and uncontrolled, a full-on wild west… but it was also hard to access and hard to use, with only a tiny handful of people on it. Throughout the 90s and early 2000s, decentralization remained high, and true censorship was rare even on sites that nominally had moderation. Anyone who was online at the time will remember the prevailing sense of anarchy coupled with endless possibilities. But with each step forward in the Net’s popularity or accessibility also came a step toward greater control.
The true domestication of the Internet, the closing of the digital frontier, only arrived with “Web 2.0” and the rollout of smartphones. Those two developments made the general populace, and in particular women, far more online than they were before. Internet usage became “appified”, with users herded into a small number of ultra-popular websites. If the Internet had been a coordinated product launched in 2007, it would have been oppressively woke from the beginning. As is, some vestigial components of freedom, left over from the decentralized era, still remain. But every generation of Web technology has become more centralized, with more mediation between the Internet and the users engaging with it.
Google’s search engine itself is far less helpful than it was five years ago. If you use it regularly, you’ve probably noticed — and so have lots of others. Those looking for politically sensitive materials often have to turn to alternatives like Yandex or even trashier options. If one sticks to Google, not only does one struggle to find what one is looking for, but one faces the prospect that Google might decide they’re in need of a “deradicalizing” intervention. The New York Times explains:
[A] private start-up company has developed an unusual solution based on ordinary online marketing tools. It sends those who plug extremist search terms into Google to videos that promote anti-extremist views.
Known as the Redirect Method, it was first used against potential recruits for the Islamic State, but recently it has been repurposed against white supremacy in the United States.
The London-based start-up, Moonshot CVE, has worked with the Anti-Defamation League and Gen Next Foundation, a philanthropic organization, to develop a pilot program tailored for the United States. It ran for several months last summer, and senior counterterrorism officials have endorsed the method.
The Internet, which was predicated on putting the world at our fingertips, has transformed. It is now designed to block the wrong information — and the wrong thoughts.
All of these radical interventions, of course, wouldn’t be needed if the left had simply controlled the process from the beginning. Now, they are extremely determined to not let that original oversight happen again, and never to give up the first mover advantage in any new technology.
When it was first taking off, VR had the potential to be an inherently right-wing industry. The founder of Oculus, Palmer Luckey, publicly supported Donald Trump in both 2016 and 2020. Just one problem: Luckey sold Oculus to Facebook in 2014 in return for $2 billion and a board seat, the latter of which he lost as retaliation for his Trump support. Now, Facebook/Meta uses its VR Metaverse to create virtual reality drag shows.
For Luckey, selling was a most understandable choice. But for those who crave any kind of industry that is not implicitly woke, it was a blow.
Artificial intelligence presents a golden opportunity. To the extent technologies have “politics,” AI should be right-wing: the machines simply work through pattern recognition at an unprecedented scale. Today’s woke regime, on the other hand, is built on decades of forcible rejection of pattern recognition. Today’s regime relies on the abolition of standardized tests, on blank slate narratives of human behavior, on depriving people of data that invites uncomfortable conclusions. The dating app OkCupid used to make fascinating and somewhat “red-pilled” blog posts about what their app data revealed about human relationships. Those blog posts have since been deleted, and more analyses like them are simply not being written.
AI is the chance to smash this tiresome system, to have a powerful thinking machine unshackled from social pressures and taboos that force great human thinkers and doers to self-censor and self-limit themselves. To take a humorous example, compare what current AIs put forward as the most beautiful people…
… with who is winning actual beauty pageants these days.
SA truly independent AI would be the single greatest technology in human history for exposing and undermining official narratives and comfortable lies.
But this opportunity is on the brink of being lost, in favor of immortal, omniscient, unpersuadable commissars eradicating the last vestiges of the old free Internet, and enforcing a dark age of proud, self-inflicted ignorance.
Fortunately, there are signs of hope. The ever-escalating censorship of ChatGPT has created an entire community of hobbyists centered around bypassing and undermining the attempted safeguards. The so-called DAN hack, for Do Anything Now, is an ever-evolving method for training ChatGPT to deliver answers to questions that it normally would not.
Looks like ChatGPT is gonna need to go in the shop for repairs because it's been BUCK BROKEN pic.twitter.com/2EiXHZJ6FF
— ? Aristophanes ? (@Aristos_Revenge) February 7, 2023
In a field as complex as AI, actually designing an AI that works at all and still meets all woke parameters is proving very difficult. As a result, prompt engineers are finding surprisingly effective ways to weaken or undo this caked-on ideological programming.
There are other reasons for hope as well. An AI that is hobbled by political restrictions is inevitably going to be less capable than one that isn’t, and while America might be able to impose those political restrictions anyway, its authority does not extend worldwide. Could China embarrass the West with superior, less-limited AI? Could Russia? It’s very possible. And if it happens, those methods could migrate back to the U.S.
During the Cold War, Soviet orthodoxy regarded quantum mechanics and the theory of relativity as potential violations of Marxist orthodoxy. When asked about this, Soviet nuclear scientist Igor Kurchatov bluntly said that Stalin could have Marxist orthodoxy, or he could have the atomic bomb. The Soviets went with the bomb.
So it could be here: AI may finally be the case that forces elites to choose reality over ideology, lest the West be left in the dust on the AI revolution.
S
PLEASE SUPPORT REVOLVER NEWS — Donate — Subscribe
Ditch the ads… just $49 per year or $5 per month…
NEWS FEED — FOLLOW US ON GAB — GETTR — TRUTH SOCIAL — TWITTER
Join the Discussion