If it seems too good to be true, it’s probably false. Ditto if it seems too bad or too sad. Such is virtual reality in the cybersphere in the age of artificial intelligence (AI). It’s manipulative.

Move over, spam email; make way for “slop” social media posts. We’ve (nearly) all seen them, and most of us have been fooled at least once. Slop refers to low-quality, AI-produced content, usually emotional images and tall tales that grab our attention.

They often give the initial impression of being a true story – the dog that rescues a kitten, the bride and groom overcoming tragedies and challenges, success stories against all odds, and historical tales – somehow replete with photos from before the age of photography.

Together with AI-generated videos, the sky’s the limit. I recently watched a clip purporting to show an eagle flying into a crowded train and carrying away a suitcase packed with a bomb. It’s literally unbelievable, but the comments showed that some people fell for it – unless the comments themselves were the product of a bot and not a human.

If you’ve ever seen a clip of bears bouncing on a trampoline in a back garden, did you stop to ask: How did a kids’ trampoline sustain the weight of the bears? Why was the security camera trained on the trampoline in the first place? And what made you believe something no more credible than the story of Goldilocks?

Artificial Intelligence – Illustrative Image (credit: INGIMAGE)

Enshittification

There’s a great word that encapsulates the process: “enshittification.” Australia’s Macquarie Dictionary chose it as the Word of the Year for 2024, with good reason.

The term was coined by British-Canadian journalist Cory Doctorow in 2022 to describe what the dictionary defined as “the gradual deterioration of a service or product brought about by a reduction in the quality of service provided, especially of an online platform, and as a consequence of profit-seeking.”

Priya Bharadia wrote a piece in The Guardian last month under the catchy title: “Cat soap operas and babies trapped in space: the ‘AI slop’ taking over YouTube.” She quotes Dr Akhil Bhardwaj, an associate professor at the University of Bath’s school of management, who said: “AI slop is flooding the Internet with content that essentially is garbage... This enshittification is ruining online communities on Pinterest, competing for revenue with artists on Spotify, and flooding YouTube with poor-quality content.

“One way for social media companies to regulate AI slop is to ensure that it cannot be monetized, thus stripping away the incentive for generating it.”

Did you jump right in headfirst with those clips of cats in a diving contest? And how about those marriages made in AI heaven and hell? Among the recent viral photos was one of a groom in uniform at his wedding, standing on two prosthetic legs. He wore the red beret of a paratrooper but not a paratrooper’s shirt, was inexplicably wearing suspenders and, even more curiously, had black boots on his false feet. Believe me, no paratrooper would be seen dead in black boots rather than red combat boots, let alone at his own wedding.

Another photo purportedly showed a woman in her bridal gown leaning over her comatose fiancé in his hospital bed declaring she will never give up on him. Only AI would dress a soldier in a clean and neatly pressed uniform, complete with beret, in an intensive care unit.

There’s also the tearful double amputee in military uniform sitting in a wheelchair, declaring that this was not how she expected to spend her 30th birthday. Elsewhere on the same site, the same woman appears laughing joyously in the arms of her boyfriend.

IT’S NOT ALWAYS clear whether these stories are created to inspire or to depress the viewer. Either way, nothing good is going to come of it. You have to step back for a second and think, rather than reacting immediately and emotionally.

Speaking recently on Reshet Bet’s Rina and Akiva radio program, sociologist Dr. Yuval Dror noted that the images are produced because they “create engagement.” If you write a comment or hit “like,” you’ve shown interest, and the social media algorithm will bring that page back to your feed later.

The images and stories are cheap to produce and look credible at first glance. It’s becoming hard to tell what’s true, Dror said. “I’m concerned that the minute you can’t tell what’s real and what’s fake – and you dismiss everything on the Internet with a wave of your hand – the truth loses its meaning. And that has repercussions – on democracy, on dialogue...

“There are vested interests at different levels, from the low-level attempt to sell you something, to attempts to divert discussion. There are so many lies that some people give up and say, ‘It’s all lies,’” said Dror.

Roi Shoshan of the NGO FakeReporter told the same program that there is a real threat that slop-dominated groups, with tens of thousands of followers, will be able to influence elections by spreading fake news.

Take, for instance, the deepfake video that went viral in July showing Barack Obama being arrested in the Oval Office as he sat with President Donald Trump. Some people found it amusing; others found it offensive. All should find it disturbing. When you can’t tell true from false, chaos ensues.

A recent article in The Conversation pointed out another area that’s under attack: “Listening to Spotify? Be skeptical of that new band, The Velvet Sundown, that appeared on the streaming service with a creative backstory and derivative tracks. It’s AI-generated.”

Incidentally, the picture used to illustrate the article showed a tearful girl in a boat cuddling her puppy. The caption noted: “This AI-generated image spread far and wide in the wake of Hurricane Helene in 2024.” The hurricane was real, but the photo was fake.

The article noted that “even Wikipedia is dealing with AI-generated low-quality content that strains its entire community moderation system. If the organization is not successful in removing it, a key information resource people depend on is at risk.”

AI searches are even worse. In a moment of vanity – no columnist is without an ego – I entered my name into a Gemini search to see what the AI program said about me. At first, the answer presented readily available information from my social media profiles and The Jerusalem Post site. And then it did something shocking; it declared: “A New York Times article states she is also a Holocaust educator and a former hostage.”

It’s very unnerving to be described as a former hostage when there are still some 48 real hostages being held by Hamas and other jihadists in Gaza. I later figured out the AI program had confused me with Liat Atzili, who had been held captive in Gaza after being abducted during the October 7, 2023 invasion and mega-atrocity.

But think about the way these searches work. Israel is at a disadvantage when vast amounts of money are being invested in deliberately creating fake news, blood libels, and lies. The slop images purportedly coming out of Gaza are part of the Palestinian propaganda war, the so-called “Pallywood” phenomenon.

Some purport to show IDF soldiers committing war crimes – a quick check reveals they’re not wearing IDF uniforms, and the background is not Gaza or even a real place in the real world. Many of the photos of “starving children” were AI-generated.

There’s a plethora of pictures of cute kids clutching kittens among the ruins. Early in the war, Dr. Rafi Kishon shared a photo he found very touching, noting: “In particular, as a veterinarian, I was very moved to see a cat with five paws.”

Of all the material produced by AI programs like Gemini and ChatGPT, the caveat that the information might not be true is sometimes the most reliable part.

I’M WRITING these lines ahead of the Jewish New Year with no idea what the news will be by the time they appear. I cherish my 25-hour break from news and social media every Shabbat, and if you’re looking for a New Year’s resolution, I can recommend a weekly pause to recharge your human batteries.

May the new year 5786 bring us truly inspirational stories and real good news, and protect us from an AI world that’s sloppy.