The Creativity Crisis:
What Happens When Content is Too Easy to Make
The internet is overflowing with AI-generated content for good reason: generative AI tools are powerful, accessible, and deliver undeniable productivity gains. I use AI tools all the time.
But like any disruptive technology, there are trade-offs. When it becomes THIS easy to create, taste and quality can take a back seat to volume and scale. With minimal effort, anyone can spin up autonomous agents that churn out content at a staggering rate - completely unsupervised.
The result? A surge of low-effort, low-context content (commonly known as “AI Slop”) making its way into our feeds.
The Real Risk:
Beyond the noise, my biggest concern is misinformation. Recklessly generated (or intentionally misleading) content can spread quickly. Less experienced users (especially seniors) are vulnerable. It’s the modern-day version of spam - but harder to spot.
Platform Responses:
Platforms find themselves in a state of catch-up, trying to address the fallout with things like:
- Detection tools
- Disclosure requirements (AI-generated content labels)
- Policy updates regarding monetization of AI-generated content
But the truth is, a lot of AI content is already becoming indistinguishable from reality for both humans and machines, and it’s only going to get better.
My Take:
I’ve worked in digital media for 30 years - and in AI for more than half of that. I’ve had a front-row seat for these waves of disruption.
Creating great content used to be hard - it was time-consuming, technical, and expensive.
Now? Creative minds with ideas, storytelling instincts, and curiosity have been handed a really powerful toolkit.
Like every major tech shift, generative AI is a great equalizer. Today, high-quality content is easier than ever to produce. Photoshop revolutionized how content was made, but it didn’t change how it was perceived. Content will always be subjective, no matter how it’s created.
In this new era, taste, timing, context, and curation matter more than ever. Whether it’s human-made or AI-assisted, the next wave of relevant content will be defined by its ability to:
- Spark curiosity
- Capture attention
- Genuinely engage
- Tell stories
Yes, there’s a lot of slop right now. But slop doesn’t stick. It’s noise.
Quality still matters - the bar is just in a different place now.
AI for People Who Don’t Like AI: Start Thinking “Digital Assistant”
I get it. The hype can feel overwhelming, and the concerns are real.
But what if we stopped calling it “artificial intelligence” and started thinking about what most of these tools really are: DIGITAL ASSISTANTS that can…
• Boost our productivity
• Help with writer’s block
• Handle routine administrative tasks
• Brainstorm ideas and approaches
When framed this way, it’s easier to integrate into your workflow (and also set boundaries).
Your digital assistant:
• Doesn’t make decisions. It can draft an email or organize information, but you’re still the one leading, thinking, and strategizing.
• Doesn’t know your job like you do. It can format a proposal or clean up your data, but you’re still creating the vision and drawing the insights.
• Works when you need it. Always on standby, ready when you’ve got something to delegate.
My recent work helping teams with AI literacy has been about breaking down these barriers - helping non-technical people realize they don’t need to “learn AI.” They just need to delegate routine parts of their day without feeling intimidated or overwhelmed.
It doesn’t always have to be revolutionary. More often than not, “practical” is the best place to start.
The AI Identity Crisis:
Platform Decay, Model Collapse, and Data Pollution
“Platform decay” refers to the decline in quality of a service or platform.
There are lots of reasons it happens, but in tech, it usually comes down to this: user experience gets sacrificed in favour of other priorities, like ad revenue, automation, or growth at any cost.
In the world of AI, the equivalent risk is something called model collapse.
Here’s the concern: AI models are now starting to train on content that was generated by previous models (or other AI systems) - essentially, a copy of a copy. This leads to “generational loss,” where outputs gradually lose grounding in real, original data.
Most large AI models are trained on huge datasets scraped from all over the internet.
But now, more and more of what’s on the internet is being generated by AI tools (more specifically, people using AI tools) - content, images, code, videos, etc.
So we’ve entered a strange loop: AI tools are creating the content that trains the next generation of AI tools. Over time, the signal gets weaker, the noise gets louder, and the outputs drift further from reality.
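To make the “copy of a copy” effect concrete, here’s a toy simulation of my own (not from any real training pipeline): the “model” is nothing more than a fitted mean and standard deviation, and each generation is trained only on the previous generation’s synthetic samples. Over many generations, the fitted distribution tends to narrow - a statistical stand-in for generational loss.

```python
import random
import statistics

random.seed(0)

SAMPLES_PER_GEN = 20   # small training sets make the drift visible
GENERATIONS = 500

# "Real" data: samples from a distribution with std dev 10.
data = [random.gauss(0.0, 10.0) for _ in range(SAMPLES_PER_GEN)]

history = []
for gen in range(GENERATIONS):
    # "Train" a model: fit a mean and std dev to the current data.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    history.append(sigma)
    # The next generation sees ONLY this model's synthetic output.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GEN)]

print(f"std dev at generation 1: {history[0]:.2f}")
print(f"std dev at generation {GENERATIONS}: {history[-1]:.2f}")
```

The numbers here are invented for illustration, but the mechanism is the real concern: each generation can only reproduce what the previous one sampled, so the tails - the rare, original material - gradually disappear.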
This is what many are now calling “data pollution” - a flood of synthetic information that’s hard to trace, hard to filter, and easy to mistake for human-generated work.
What’s the solution? There’s no magic fix.
But here are a few of the ways researchers and developers are trying to prevent collapse:
- Detection & filtering: tools that identify and flag synthetic data before it’s used in training.
- Control datasets: validated sets of human-authored content used as benchmarks.
- Cumulative training: building on trusted data over time, rather than starting fresh with newly scraped data every time a new model is trained.
- Human oversight: expert review, reinforcement learning, and feedback to keep things grounded.
This space is still evolving fast. It’s the Wild West right now.
If AI is going to be useful long-term, we need to keep it anchored in reality.
Time will tell if we can pull it off.
Offload the drudgery.
Think AI is only for complex tasks or enterprise workflows? Think again.
A ton of value can be realized from simple, repetitive, energy-draining stuff, like:
- Drafting customized follow-up emails
- Formatting or summarizing reports
- Extracting key info from documents or spreadsheets
- Rephrasing the same info for multiple audiences
These tasks aren’t glamorous, but automating them saves time, boosts morale, and frees up your brain for actual thinking.
AI isn’t here to replace you. It’s here to offload the drudgery.
And no, you don’t need to be a tech whiz to start.
If your team is losing hours to boring, repetitive work, let’s talk about how AI can quietly and efficiently take care of it - without overhauling everything.
Ever had an AI chatbot give you a super-confident answer that was… completely wrong?
That happens pretty frequently, and it has a name:
AI hallucinations: when an AI generates content that sounds accurate, but isn’t.
Your chatbot isn’t trying to lie. It’s doing what it was trained to do: predict the next likely word based on patterns from huge datasets.
The result? A confidently worded guess that could be completely wrong.
This isn’t technically a glitch or a bug, but it exposes a fundamental limitation of how large language models work. These tools are incredibly powerful, but they have clear limitations. A big part of AI literacy is understanding those limitations.
The solution?
- Show teams how to use AI critically, not blindly.
- Build workflows that include human oversight.
- Promote AI literacy, not just AI access.
Your email spam filter is one of the earliest examples of AI quietly working in the background, protecting you every day for more than two decades.
Behind the scenes, it has been using good old-fashioned AI, powered by a mix of:
- Rule-based logic
- Pattern recognition
- Keyword detection (common spam words and phrases)
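A filter built on those three ingredients can be sketched in a few lines. The keywords, weights, and threshold below are invented for illustration, not taken from any real filter, but the shape is faithful to how early rule-based systems scored messages:

```python
# Minimal rule-based spam scorer, in the spirit of early filters.
# Keywords, weights, and threshold are invented for illustration.
SPAM_KEYWORDS = {
    "free": 2, "winner": 3,
    "act now": 3, "no obligation": 2, "click here": 2,
}
SPAM_THRESHOLD = 4

def spam_score(message: str) -> int:
    """Sum the weights of every known spam phrase found in the message."""
    text = message.lower()
    return sum(w for phrase, w in SPAM_KEYWORDS.items() if phrase in text)

def is_spam(message: str) -> bool:
    return spam_score(message) >= SPAM_THRESHOLD

print(is_spam("Congratulations WINNER! Click here for your FREE prize"))  # True
print(is_spam("Meeting moved to 3pm, see agenda attached"))               # False
```

Every decision here is inspectable: you can point at the exact rule that flagged a message, which is precisely what the later machine-learning layer traded away for adaptability.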
Eventually, machine learning was layered in, providing the ability to improve based on behavioural signals (like what you mark as junk) and to better detect the structure and wording of spam messages.
…all long before ChatGPT ever hit your radar.
AI doesn’t have to be flashy to be effective. Sometimes, it just keeps your inbox clean.
Good Old-Fashioned Artificial Intelligence (GOFAI - the kind built on rules, logic, and decision trees) is still incredibly effective for many business problems.
Why?
Because it provides features that today’s black-box models often can’t:
- Total control
- Full transparency
- Predictable outputs
If you’re automating decisions or need to deliver consistent, reliable experiences, GOFAI just might be the smarter choice. No AI hallucinations. No surprises. Just smart, stable systems that do exactly what you need them to.
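Here’s a sketch of what that looks like in practice. The ticket-routing rules below are invented for illustration, but the point is the shape: explicit, readable logic whose output you can predict and audit line by line.

```python
# A GOFAI-style decision procedure: explicit rules, fully auditable.
# The routing rules themselves are invented for illustration.
def route_ticket(subject: str, is_paying_customer: bool) -> str:
    text = subject.lower()
    if "refund" in text or "billing" in text:
        return "billing-team"
    if "outage" in text or "down" in text:
        # Paying customers get priority handling for outages.
        return "priority-support" if is_paying_customer else "tech-support"
    return "general-queue"

print(route_ticket("Site is down!", is_paying_customer=True))   # priority-support
print(route_ticket("Question about billing", False))            # billing-team
```

Given the same input, this returns the same output every time - no hallucinations, and every branch can be explained to a customer or an auditor.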
GOFAI - it's not always sexy but it works.
Remember Clippy?
That wide-eyed paperclip in MS Office that would ask if you needed help writing a letter? (Yes, I'm using the term "help" generously.)
Clippy was a form of AI. Not by today's standards, but what's known as "good old-fashioned artificial intelligence" (GOFAI). It didn't learn or adapt, but it was still AI: it was based on rules and predictive algorithms that tried to interpret what you were doing and offered to help.
Modern AI assistants are smarter and faster, but they owe a nod to that annoying little paperclip from 1997. We all do.
Want to start integrating AI into your workplace? Need to leverage AI to become more productive and competitive? Don't know where to start? We can help - we've been doing exactly that for 17 years, helping clients solve real problems with AI.