
Elon has beef with political correctness.
And creating any kind of safe space online.
Grok-2, X's AI language model, just provided more proof of that. It's only a week old and has already produced a surge of deepfakes that are alarming, to say the least.
With advertisers already leaving X in droves, this will likely push them even further away.
The model is almost completely unconstrained, with little to no content moderation, and early examples depicted real people and places engaged in shocking or violent situations.
How bad can it really be, though, right? Um, Trump and Kamala giving a thumbs up in an aeroplane with the burning Twin Towers visible through the windshield, bad.
Or a blood-spattered Ronald McDonald standing outside BK with an automatic rifle.
How about Mickey Mouse saluting Hitler? Yeah, that bad.
Grok also created sexual images of stars like Taylor Swift, who has been targeted with deepfakes in the past. Through a loophole, one X user created violent photos of children being gunned down by Mickey Mouse and Elon Musk.
'This is one of the most reckless and irresponsible AI implementations I've ever seen,' Alejandra Caraballo, an instructor at Harvard Law School's Cyberlaw Clinic, wrote in an X post yesterday.
Another commented on the fact that there is some moderation, but not the good kind.
'The new grok update quite literally erases...queer couples from existing in ai images. I asked it to generate:
-Elton John and his husband
and it turned them ALL cishet.'
Advertisers have been fleeing the platform in hordes due to fears over brand safety, as their ads began appearing next to racist, violent and hateful content.
Notable brands, too: Apple, IBM, Disney and Sony all pulled their advertising from X in the wake of such content and controversial comments from Musk.
Last month, he launched an antitrust lawsuit against a marketing trade body and a string of major advertisers. In his opinion, they're carrying out an 'illegal boycott' of the platform.
Data indicated that the company's ad revenue plummeted 55% year-over-year each month in its first year under Elon's ownership. Yikes.
I can't imagine this Grok-2 saga is going to improve that number. It appears scandal after scandal is convincing advertisers the platform is not worth the hassle.
The EU's recently enacted AI Act includes a clause that requires the disclosure of deepfake content.
You would think that with the sudden spike in AI-generated misinformation on the web due to the intensifying election cycle, such a law would be in the works in the US. However, that is not the case (yet).
There have been calls for private companies to build watermarks or labelling mechanisms into their AI models. But no such practices have yet been put in place for Grok.
This certainly can't be doing X's reputation any favours right now.