He’s Amplified Fake Porn of Real People, and Cutting Him Off Is $imple
Fake pictures of women, nude or in pornographic situations, have been around for a while. Elon Musk’s AI program Grok, and its easy connection to his X social media platform, have exponentially amplified and simplified the practice. He could easily choose not to allow it.
Why should he? A recent interview spells it all out very well. It aired on the NPR show Science Friday, with Hany Farid, a professor at the UC Berkeley School of Information who has studied related issues for decades. Here I quote both his interview and a message he sent me when I asked him more about all this.
One problem is the volume of abusive fake images. In recent weeks it has exploded, as people with no skill at making such images have discovered that, with a few clicks and prompts, Grok can produce excellent ones in seconds.
Another problem is the quality. You can grab any image, your own or one off the net, and tell Grok to put the person in some pornographic situation; it will seamlessly place their face on an AI-generated body while maintaining the background. So it looks like that person is in a setting they would recognize, doing whatever was asked. The fakes are so good that, in testing, people do little better than chance at guessing whether an image is real.
A third problem is what it does to people. Did you have some social awkwardness in high school? Imagine if, back then, someone made a horribly embarrassing fake picture of you and put it on social media, bound to be seen by many, and now you have to spend the day in school trying to deny it’s real, all while not knowing which of your classmates made it. And in adult life, with that kind of picture out there, will you get the job offer? The rental you’re applying for? The date you’re hoping for on the dating app?
A fourth problem is that the images are sometimes of children, as reported by The Verge.
And a fifth: sometimes such images of teenagers aren’t just posted, they’re sent to the victims to extort them.
The New York Times just reported that the European Union is investigating the X platform for possible violation of its regulations on these issues. In response, X says it “limited Grok’s A.I. image creation to users who paid for premium features” and “later expanded those guardrails, saying that it would no longer allow anyone to prompt Grok’s X account for ‘images of real people in revealing clothing such as bikinis.’” That sounds like a loophole, as if people who pay, or people who use Grok directly and then post to X or other social media, can continue. It’s unclear, but the European Union is not satisfied and is proceeding with its investigation.
The thing is, there are simple fixes for this. I love it when big problems have simple fixes; those are called elegant solutions.
One of those solutions would be as simple as Elon Musk deciding he’s rich enough that he doesn’t have to allow this to happen. As Mr. Farid pointed out, “take many of the prompts that you’re seeing people put into Grok AI and try to put them into OpenAI’s ChatGPT or Google’s Gemini, and it won’t work,” because those companies have simply programmed filters into their AI to refuse such requests. Obviously Grok could be programmed the same way. As Mr. Farid also pointed out, though, we need to stop simply appealing to media CEOs and hoping they’ll play nice when there are solutions with more clout.
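For readers curious what such a refusal filter looks like in principle, here is a minimal sketch. It is purely illustrative: the function name, the keyword list, and the refusal message are my own inventions, and real systems at companies like OpenAI and Google use trained classifiers rather than keyword lists. But the gate works the same way conceptually: the prompt is checked before it ever reaches the image model, and abusive requests are refused.

```python
# Illustrative sketch of a prompt-refusal gate, not any company's actual filter.
# Real deployments use trained content classifiers; a keyword list stands in
# for one here to show where the check sits in the pipeline.

REFUSED_TERMS = ["nude", "undress", "remove clothing", "bikini", "sexual"]

def moderate_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message) for an image-generation prompt.

    The check runs before the prompt reaches the image model,
    so refused requests never generate anything at all.
    """
    lowered = prompt.lower()
    for term in REFUSED_TERMS:
        if term in lowered:
            return (False, "Request refused: it sexualizes a real person.")
    return (True, "ok")

if __name__ == "__main__":
    print(moderate_prompt("put her in a bikini on a beach"))
    print(moderate_prompt("put a party hat on this person"))
```

The point of the sketch is how little machinery the decision requires: the refusal is one conditional in front of the model, which is why “Grok could be programmed the same way” is a choice, not a technical hurdle.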
Along those lines, he offered several other solutions that don’t depend on Musk or media CEOs. They are conceptually simple. They’re not easy, because they all have to do with money, but they are doable.
One: Apple and Google could easily declare the app a violation of their app store policies because it is being used for so much abuse, and refuse to carry it. Boom! Suddenly step one, getting the app, is blocked.
Two: Stop the advertising that goes with it. When these images are posted on social media, many are shown next to ads. If the biggest advertisers were shamed into demanding that their ads not appear next to such images, the profit behind it would take a huge hit. In his message he noted of the advertisers, “they hold the power to effect change.”
Three: Mr. Farid noted that there are also websites, separate from Grok or X, that offer making such images as a service. Upload a picture, say what you want it turned into, pay a fee, and they’ll make the fake for you. But did you notice a little phrase in there, “pay a fee”? How is that fee paid? Often such sites accept standard credit cards and common online payment systems. Shame the big banks and financial companies into refusing to process payments for such sites, and there goes that system. Mr. Farid noted in his message that this has actually been done before, when PornHub lost the ability to accept payments after revelations of child pornography. Fake-image sites could get around the ban by accepting payment in crypto, but most novices don’t know how to make raw crypto transactions; they only do it through some financial service that handles it for them. Same thing: shame those financial services into refusing those sites. The beauty of this solution is that it even applies to sites hosted in countries with no law or enforcement that would otherwise stop them.
So, there are simple solutions to a big problem, a serious problem that does serious damage to many people, where the only obstacle is a small hit to the money some big companies make. If it’s the big companies and their profits that decide the end result, and the damage to people is allowed to continue, doesn’t that perfectly fit the definition of an oligarchy? Seriously, how else can one explain such a result? Bernie Sanders is right on target.

