
BLOGS AND VLOGS

30 years in business is no small feat. Check out Shawn's new video series:
30 Years - Confessions of a Serial Entrepreneur
Also check out his mini-blog series on AI.

November 2025:

Your first AI project should be as boring as possible.

Not mind-blowing.
Not futuristic.
Not “change the world” scale.

Boring.

Why? Because boring is where the real pain points live:
* The tedious tasks everyone avoids.
* The reports no one wants to write.
* The repetitive stuff that clogs up your workflow.

They aren’t flashy.
They won’t make headlines.
But they will get buy-in.

The FASTEST way to build trust in AI? Make someone’s day a little easier.

So yes, dream big. But start small. Solve real problems.

Start boring. You’ll be surprised how quickly “boring” turns into “why didn’t we do this sooner?”

#AI #AIatWork #BeBoring

October 2025:

The AI stories just keep rolling in.

Here are two recent “gotcha” moments that give even the most tech-forward organizations pause:
- OpenAI’s CEO warned that ChatGPT conversations aren’t legally protected and could be used in court.
- Earlier this summer, thousands of users’ AI chatbot conversations were exposed to Google and other search engines due to experimental link-sharing features.

These aren’t hypothetical risks - they’re real. For organizations dipping their toes into AI, they definitely add to the anxiety around AI adoption and implementation.

What These Headlines Mean for You:
- Privacy isn’t automatic. I tell all of my clients that sending info into a third-party chatbot always comes with risk.
- Your content can outlive your intentions. Things shared in what you thought was a private session could surface elsewhere.
- Regulation is far behind. Things are changing so fast that regulatory frameworks may never fully catch up.

What Smart Organizations Do Differently:
- Clear usage policies. Clarify what can and cannot go into an AI chat.
- Educate users. Not everyone understands the risks. Help team members develop AI literacy and set up clear guardrails.
- Choose tools wisely. Choose platforms with transparent privacy, data, and retention policies.
- Drop the “plug-and-play” mindset. Consider hosting your own AI models to reduce exposure and retain control.

At the end of the day, these headlines aren’t “doom and gloom” - they are reminders. If you approach AI thoughtfully, you don’t have to gamble.

September 2025:

[Image: a cute robot pouring slop into a computer, representing “AI Slop”.]

The Creativity Crisis:
What Happens When Content is Too Easy to Make

The internet is overflowing with AI-generated content for good reason: generative AI tools are powerful, accessible, and deliver undeniable productivity gains. I use AI tools all the time.

But like any disruptive technology, there are trade-offs. When it becomes THIS easy to create, taste and quality can take a back seat to volume and scale. With minimal effort, anyone can spin up autonomous agents that churn out content at a staggering rate - completely unsupervised.

The result? A surge of low-effort, low-context content (commonly known as “AI Slop”) making its way into our feeds.

The Real Risk:
Beyond the noise, my biggest concern is misinformation. Recklessly generated (or intentionally misleading) content can spread quickly. Less experienced users (especially seniors) are vulnerable. It’s the modern-day version of spam - but harder to spot.

Platform Responses:
Platforms find themselves in a state of catch-up, trying to address the fallout with things like:
- Detection tools
- Disclosure requirements (AI-generated content labels)
- Policy updates regarding monetization of AI-generated content

But the truth is, a lot of AI content is already becoming indistinguishable from reality for both humans and machines, and it’s only going to get better.

My Take:
I’ve worked in digital media for 30 years - and in AI for more than half of that. I’ve had a front-row seat for these waves of disruption.

Creating great content used to be hard - it was time-consuming, technical, and expensive.

Now? Creative minds with ideas, storytelling instincts, and curiosity have been handed a really powerful toolkit.

Like every major tech shift, generative AI is a great equalizer. Today, high-quality content is easier than ever to produce. Photoshop revolutionized how content was made, but it didn’t change how it was perceived. Content will always be subjective, no matter how it’s created.

In this new era, taste, timing, context, and curation matter more than ever. Whether it’s human-made or AI-assisted, the next wave of relevant content will be defined by its ability to:
- Spark curiosity
- Capture attention
- Genuinely engage
- Tell stories

Yes, there’s a lot of slop right now. But slop doesn’t stick. It’s noise.

Quality still matters - the bar is just in a different place now.

August 2025:

[Image: a young woman working at her computer with the assistance of a cute robot sitting on her desk beside her.]

AI for People Who Don’t Like AI: Start Thinking “Digital Assistant”

I get it. The hype can feel overwhelming, and the concerns are real.

But what if we stopped calling it “artificial intelligence” and started thinking about what most of these tools really are: DIGITAL ASSISTANTS that can…
• Boost our productivity
• Help with writer’s block
• Handle routine administrative tasks
• Brainstorm ideas and approaches

When framed this way, it’s easier to integrate into your workflow (and also set boundaries).

Your digital assistant:
• Doesn’t make decisions. It can draft an email or organize information, but you’re still the one leading, thinking, and strategizing.
• Doesn’t know your job like you do. It can format a proposal or clean up your data, but you’re still creating the vision and drawing the insights.
• Works when you need it. Always on standby, ready when you’ve got something to delegate.

My recent work helping teams with AI literacy has been about breaking down these barriers - helping non-technical people realize they don’t need to “learn AI.” They just need to delegate routine parts of their day without feeling intimidated or overwhelmed.

It doesn’t always have to be revolutionary. More often than not, “practical” is the best place to start.

July 2025:

[Image: a robot looking into a mirror and seeing a pixelated, distorted reflection of itself.]

The AI Identity Crisis:
Platform Decay, Model Collapse, and Data Pollution

“Platform decay” refers to the decline in quality of a service or platform.

There are lots of reasons it happens, but in tech, it usually comes down to this: user experience gets sacrificed in favour of other priorities, like ad revenue, automation, or growth at any cost.

In the world of AI, the equivalent risk is something called model collapse.

Here’s the concern: AI models are now starting to train on content that was generated by previous models (or other AI systems) - essentially, a copy of a copy. This leads to “generational loss,” where outputs gradually lose grounding in real, original data.

Most large AI models are trained on huge datasets scraped from all over the internet.

But now, more and more of what’s on the internet is being generated by AI tools (more specifically, people using AI tools) - content, images, code, videos, etc.

So we’ve entered a strange loop: AI tools are creating the content that trains the next generation of AI tools. Over time, the signal gets weaker, the noise gets louder, and the outputs drift further from reality.
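
That copy-of-a-copy loop can be sketched with a toy statistical model. This is illustrative only (the numbers and distribution are invented for the example; real training dynamics are far more complex): each “generation” is fit using only samples drawn from the previous generation’s model.

```python
import random
import statistics

# Toy sketch of model collapse: each "generation" is fit only to samples
# drawn from the previous generation's model - a statistical copy of a copy.
# Illustrative numbers, not a real training run.

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution

for gen in range(1, 31):
    samples = [random.gauss(mu, sigma) for _ in range(30)]
    mu = statistics.mean(samples)      # refit using only synthetic data
    sigma = statistics.stdev(samples)  # each refit loses a little information
    if gen % 10 == 0:
        print(f"generation {gen:2d}: sigma = {sigma:.3f}")

# Over many generations, sigma tends to drift away from the original
# distribution - the statistical version of "generational loss".
```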

This is what many are now calling “data pollution” - a flood of synthetic information that’s hard to trace, hard to filter, and easy to mistake for human-generated work.

What’s the solution? There’s no magic fix.

But here are a few of the ways researchers and developers are trying to prevent collapse:
- Detection & filtering: tools that identify and flag synthetic data before it’s used in training.
- Control datasets: validated sets of human-authored content used as benchmarks.
- Cumulative training: building on trusted data over time, rather than starting fresh with newly scraped data every time a new model is trained.
- Human oversight: expert review, reinforcement learning, and feedback to keep things grounded.

This space is still evolving fast. It’s the Wild West right now.

If AI is going to be useful long-term, we need to keep it anchored in reality.

Time will tell if we can pull it off.

July 2025:

[Image: a young woman sitting at her desk, looking bewildered at all of the work she needs to complete.]

Offload the drudgery.

Think AI is only for complex tasks or enterprise workflows? Think again.

A ton of value can be realized from simple, repetitive, energy-draining stuff, like:
- Drafting customized follow-up emails
- Formatting, or summarizing reports
- Extracting key info from documents or spreadsheets
- Rephrasing the same info for multiple audiences

These tasks aren’t glamorous, but automating them saves time, boosts morale, and frees up your brain for actual thinking.

AI isn’t here to replace you. It’s here to offload the drudgery.

And no, you don’t need to be a tech whiz to start.

If your team is losing hours to boring, repetitive work, let’s talk about how AI can quietly and efficiently take care of it - without overhauling everything.

July 2025:

[Image: a cute robot working at its computer with swirly eyes, indicative of a hallucination.]

Ever had an AI chatbot give you a super-confident answer that was… completely wrong?

That happens pretty frequently, and it has a name:
AI hallucinations: when an AI generates content that sounds accurate, but isn’t.

Your chatbot isn’t trying to lie. It’s doing what it was trained to do: predict the next likely word based on patterns from huge datasets.

The result? A confidently worded guess that could be completely wrong.
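
That prediction mechanic can be shown with a toy bigram model (a tiny invented corpus, purely for illustration). Real LLMs use neural networks trained on vast datasets, but the core move - pick the most likely next word - is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each word
# in a tiny corpus, then always guess the most frequent follower.

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent follower - a confident guess, right or wrong."""
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - seen most often after "the" in the corpus
```

The model has no idea what a cat is; it only knows which words tend to follow which. Scale that idea up enormously and you get both the fluency and the hallucinations.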

This isn’t technically a glitch or a bug, but it exposes a fundamental limitation of how large language models work. These tools are incredibly powerful, but they have clear limitations - and a big part of AI literacy is understanding them.

The solution?
- Show teams how to use AI critically, not blindly.
- Build workflows that include human oversight.
- Promote AI literacy, not just AI access.

June 2025:

[Image: a retro robot striking a pose with a "no spam" symbol on a screen in its torso.]

Your email spam filter is one of the earliest examples of AI quietly working in the background, protecting you every day for more than two decades.

Behind the scenes, it was using good old-fashioned AI, powered by a mix of:
- Rule-based logic
- Pattern recognition
- Keyword detection (common spam words and phrases)
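
Those layers can be sketched in a few lines. This is a toy illustration, not a real filter: the keywords, rules, and threshold below are invented for the example.

```python
# Toy rule-based spam score in the spirit of early filters.
# Keywords, rules, and threshold are illustrative, not from any real product.

SPAM_KEYWORDS = {"free", "winner", "prize", "click here"}

def spam_score(subject: str, body: str) -> int:
    """Count simple rule hits; real filters combined hundreds of such rules."""
    text = f"{subject} {body}".lower()
    score = 0
    # Keyword detection: common spam words and phrases
    score += sum(2 for kw in SPAM_KEYWORDS if kw in text)
    # Pattern recognition: ALL-CAPS subjects and excessive punctuation
    if subject.isupper() and len(subject) > 5:
        score += 2
    if text.count("!") >= 3:
        score += 1
    return score

def is_spam(subject: str, body: str, threshold: int = 3) -> bool:
    return spam_score(subject, body) >= threshold

print(is_spam("YOU ARE A WINNER", "Click here to claim your free prize!!!"))  # True
print(is_spam("Team meeting", "Moved to 3pm, see agenda attached."))          # False
```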

Eventually, machine learning was layered in, providing the ability to improve based on behavioural signals (like what you mark as junk) and to better detect the structure and wording of spam messages.

…all long before ChatGPT ever hit your radar.

AI doesn’t have to be flashy to be effective. Sometimes, it just keeps your inbox clean.

June 2025:

[Image: a retro robot striking a pose with a simple flow diagram on a screen in its torso.]

Good Old-Fashioned Artificial Intelligence (GOFAI - the kind built on rules, logic, and decision trees) is still incredibly effective for many business problems.

Why?
Because it provides features that today’s black-box models often can’t:
- Total control
- Full transparency
- Predictable outputs

If you’re automating decisions or need to deliver consistent, reliable experiences, GOFAI just might be the smarter choice. No AI hallucinations. No surprises. Just smart, stable systems that do exactly what you need them to.
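
As a sketch of what that looks like in practice (the ticket categories and routing targets here are hypothetical):

```python
# Hypothetical GOFAI-style decision rules for routing support tickets.
# Every output is traceable to an explicit rule: total control, full
# transparency, predictable results - and no hallucinations.

def route_ticket(category: str, is_outage: bool, plan: str) -> str:
    if is_outage:
        return "on-call engineer"           # outages always escalate
    if category == "billing":
        return "billing team"
    if plan == "enterprise":
        return "dedicated account manager"  # contractual SLA
    return "general support queue"

print(route_ticket("billing", False, "free"))          # billing team
print(route_ticket("technical", True, "free"))         # on-call engineer
print(route_ticket("technical", False, "enterprise"))  # dedicated account manager
```

Given the same inputs, the same answer comes out every time - exactly the property many business processes need.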

GOFAI - it's not always sexy but it works.

May 2025:
You've Been Using AI for Decades - You Just Didn't Call It That

Remember Clippy?

That wide-eyed paperclip in MS Office that would pop up to "help" whenever it thought you were writing a letter? (Yes, I'm using the term "help" generously.)

Clippy was a form of AI. Not by today's standards, but it was what's now called "good old-fashioned artificial intelligence" (GOFAI). It didn't learn or adapt, but it was still AI: it was based on rules and predictive algorithms that tried to interpret what you were doing and offered to help.

Modern AI assistants are smarter and faster, but they owe a nod to that annoying little paperclip from 1997. We all do.

Want to start integrating AI into your workplace? Need to leverage AI to become more productive and competitive? Don't know where to start? We can help - we've been doing exactly that for 17 years, helping clients solve real problems with AI.

30 Years: Confessions of a Serial Entrepreneur

Video 3: Cheated, Deceived, Swindled, and Ripped Off

Video 2: "Don't You Have Someone to Do That?"

Video 1: Introduction to the Series

30 Years
1995-2025

We've been working in AI, personalization, and expert systems since 2008.