The AI rebellion: When automation starts eroding trust

Artificial intelligence has moved faster than perhaps any other technology. It’s certainly moved faster than regulation, governance and, in many cases, good judgment. Positioned as a competitive necessity, AI’s narrative has largely been told by marketing, hype and the technology companies themselves. Businesses are rolling out generative AI tools across operations, marketers are under pressure to produce more content faster, and communications teams are being pushed to improve speed and efficiency.

But this charge towards AI at the speed of the algorithm is coming at a cost: slop. The term has already joined the global lexicon, voted Word of the Year by both Merriam-Webster and the Macquarie Dictionary. A Kapwing research report found that up to 33% of YouTube is filled with AI slop videos, most aimed at attracting ad revenue. The content is getting views, but it’s losing ground on authenticity, trust and value. It’s digital clutter that’s crowding out high-value human work and polluting the broader communication environment. Its creators are being labelled ‘sloppers’, and its volume, specifically in articles, has risen from around 10% in 2022 to more than 40% in 2024 and over 52% as of 2025.

While many AI writing detectors aren’t precise, often flagging human-created content as AI-generated, the tonality, narrative structure, language and style of AI are unmistakable across LinkedIn, blogs and websites. And the communications industry is now facing the consequences of AI being deployed without sufficient oversight, strategy or quality control, because it’s not being used with the right levels of understanding.

Fabricated citations in reports, deepfakes and ongoing hallucinations make AI a less than reliable creative companion, especially when drafting content. A 2024 study found that AI platforms have hallucination rates ranging from 33% to 79%, with hallucinated references found in nearly 30% of citations. While these tests are variable and affected by factors such as prompting and the data provided, the risk is self-evident: unless you know how to prompt, you run the risk of putting out fake news.

The problem isn’t AI; it’s how it’s being used and the pressure companies are putting on employees. In the writing and communications field, the challenge is to overcome the illusory truth effect, where repeated exposure to claims or imagery increases their perceived truth, even when viewers are told they are fake. There’s also a growing body of research on ‘model autophagy disorder’, where AI trained on its own synthetic slop loses diversity, makes more errors and repeats the same concepts over and over again.

Useful information has become noise, drowned out by the scale of slop, and readers, markets and consumers are increasingly reliant on opaque algorithms to find their information. Trust in content is rapidly becoming a scarce resource.

The problem is that many companies aren’t distinguishing between AI as a support tool for tasks such as ideation, drafting frameworks or developing strategies, and AI as a replacement for expertise. And now there’s a backlash, with brands starting to face criticism for leaning too heavily on the technology.

A popular example is Coca-Cola’s AI Christmas campaign, which drew backlash from audiences concerned about the displacement of creative labour. Duolingo faced criticism for its AI-first direction as it replaced contractors with AI. A McDonald’s Netherlands AI holiday ad was pulled because it lacked warmth and authenticity, while several fashion brands have come under fire for using AI models instead of humans. And CNET was caught “AI-handed”, publishing dozens of AI-written explainer articles packed with incorrect information, as well as potential plagiarism.

The lesson isn’t to hide away or become an AI luddite, but to create authentic quality in all forms of content: to use AI as scaffolding and an aide, a smart intern capable of providing support at speed, not as a replacement for talent. Case in point: Swedish fintech Klarna, which first went all in on AI for customer service before scaling back after a decline in service quality. While not abandoning AI, the company has pivoted to a hybrid model in which people handle the more complex interactions.

In the communications industry, this is fast becoming a competitive differentiator. PR agencies and professionals that prioritise substance, authenticity, credibility and originality in their content are taking the lead, because the fundamental value of content creation comes from human experience, domain expertise and authentic insight. The industry relies on relationship-driven communication and brand trust, and research shows that human abilities are increasingly sought after by clients and that audiences prefer humans to AI.

So, what’s next? A war with AI? No. What should come next is balance, where AI is used as a tool and a support, not as the ultimate replacement for creativity, as some of the hype has suggested. And this balance will need legislation, oversight and agency-led AI guardrails that define how the technology is used, how it is overseen, and complete agency-client transparency.

By Anish Abraham, Digital Enablement Director at DUO Marketing + Communications

You have a story to tell. Let’s make it engaging and persuasive as we bring it to life.

We have a deep understanding of the Tech and Telecoms landscape and offer a wide array of services to position our clients effectively in the media and across all relevant digital platforms.

So, partner with a specialist PR and Digital Agency that understands your business, industry and customers.