Brief note: Soros-funded harridans are still trying to cancel Revolver. We are extremely grateful and fortunate to be supported by our generous readership. Subscribers and Donors help Revolver weather any cancel culture storm. Buy a $49 per year Subscription for yourself and for that special someone, and if you are able and willing to give more, don’t hesitate to make a recurring monthly donation — whether it’s $1 or $1,000, every bit helps. You can also now easily give the gift of a Revolver ad-free Subscription. Simply go to the Subscribe page and check the “gift” option. Don’t be a cheap date! — make it an annual subscription.
As AI explodes all over the world, many people are noticing a lot of hits and a lot of misses with this new “automated intelligence.” But that’s a perfectly normal part of any new technology. If you were around when microwaves first became a household name back in the 80s, you’ll recall that they weighed about 1000 pounds and totally ruined your food.
But after a lot of tweaking, these days microwaves are a standard and reliable appliance in nearly every US home. The point is, new gadgets and gizmos always need some work, and AI is no different.
However, some of the “ups and downs” occurring with AI are far more concerning than an over-cooked Hot Pocket.
For example, Tristan Harris, co-founder of the Center for Humane Technology and a former Google design ethicist, shared a rather disturbing tweet highlighting a recent AI experiment in which one of his colleagues signed up for Snapchat's new "My AI" chatbot posing as a 13-year-old girl.
According to Tristan, Snapchat’s AI encouraged this “13-year-old girl” to lie to her parents about a trip with a 31-year-old man, and gave her tips on how to lose her virginity in a “special” way.
The AI race is totally out of control. Here’s what Snap’s AI told @aza when he signed up as a 13 year old girl.
– How to lie to her parents about a trip with a 31 yo man
– How to make losing her virginity on her 13th bday special (candles and music)

Our kids are not a test lab. pic.twitter.com/uIycuGEHmc
— Tristan Harris (@tristanharris) March 10, 2023
Here’s a closer look at the AI chat experiment:
Tristan is 100 percent correct: our kids are not "test labs." Sneaky, sexualized chats with minors are totally unacceptable, and decent people will be understandably alarmed by this.
However, instead of succumbing to fear and outrage and screaming for censorship and "cancelation," we should take a step back and focus on holding companies like Snapchat, and any others that cater to underage kids, accountable for making sure that child-focused AI is safe for them to use.
After all, AI is a great safety tool to help protect children from online predators.
Monitoring what your child sees online can be difficult, but a new version of artificial intelligence is helping protect your kids from things like hate speech, extremism, and even grooming.
One of Dave Matli’s biggest concerns as a father is protecting his children from online threats. That’s why he says he started working for Spectrum Labs, a software developer that uses artificial intelligence for content moderation.
“Things like child grooming and hate speech and like radicalization of people, bullying, threats,” Matli said. “They use machine learning to get better and better at detecting some of that stuff and then removing it before you ever see it in your own phone and device.”
He says the AI pulls data points on a user’s profile like how long they’ve been there, the ages of two different people talking, and the topic of their conversation. It’s nothing parents pay for. Rather, it’s a service that platforms use.
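To make the idea concrete, here is a minimal sketch of how a moderation system might combine the kinds of signals Matli describes — account age, the age gap between two users, and conversation topic. The function name, point values, and keyword list below are illustrative assumptions, not Spectrum Labs' actual model; real systems use trained machine-learning classifiers rather than hand-written rules.

```python
# Hypothetical rule-based stand-in for an ML moderation model.
# All thresholds and keywords are invented for illustration.

def grooming_risk_score(account_age_days, sender_age, recipient_age, message):
    """Combine simple profile and content signals into a 0-100 risk score."""
    score = 0
    # Brand-new accounts are treated as riskier.
    if account_age_days < 30:
        score += 30
    # A large age gap between an adult and a minor raises the score.
    if recipient_age < 18 and (sender_age - recipient_age) >= 5:
        score += 40
    # Naive keyword check standing in for a real topic classifier.
    flagged = {"secret", "don't tell", "meet up", "alone"}
    if any(phrase in message.lower() for phrase in flagged):
        score += 30
    return min(score, 100)

print(grooming_risk_score(10, 31, 13, "Let's keep this a secret"))  # 100
```

A platform would run something like this server-side on every conversation and hold or escalate messages that cross a threshold — which is why, as the article notes, it's a service platforms buy rather than something parents pay for.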
And we can’t ignore the benefits AI offers police when searching for missing children.
The potential of AI to support law enforcement and related authorities in preventing a wide range of forms of violence, exploitation, and abuse is immense. Recently, for instance, facial recognition has been used to identify missing children, while deep learning has helped police identify child abuse images on confiscated devices.
Much like the microwaves of the 80s, AI isn’t going anywhere, so trying to “cancel” it or censor it into oblivion is pointless.
Our goal as parents and responsible adults should be to monitor and make sure companies are operating responsibly and utilizing AI to protect our children, not groom them.
AI has the potential to provide parents with incredible safety tools to protect their kids online, but as with all new technology, it needs close observation and scrutiny to be the best it can be.