
Purrview: The Tiny AI Project That Worked—And Why Most Don’t

December 2, 2025

Most AI projects fail.

Not because of missing technology. Not because the models aren’t good enough. And not because teams don’t try hard.

They fail because they’re chartered poorly.

They start without a clearly defined problem. Or worse, they start with technology searching for a problem.

Over the past few years, customers and prospects keep asking me the same question:

“What should we do with AI?”

My answer is always:

“I can’t tell you what you should do. But I can tell you what we are doing: using AI for everything we don’t like doing or can’t scale.” — Michael Cizmar

They nod politely, say this sounds wise, and then immediately run off to “change the world.”

They assemble a team of Spare Bears — the AEM developer and the Salesforce admin who were promoted to Senior AI Architect after completing a three-hour vendor certification — and those teams set out on their “AI Transformation.”

And then what happens?

They start an endless cycle of POCs. They sprawl. They drift. They lose focus. And the whole thing quietly dies six months later.

With all that in mind, I created Purrview.

Ironically, Purrview is the opposite of everything above.

A Real Problem, a Real Charter

I had a problem at home: one of my cats was doing something naughty, and I needed to catch which one so we could retrain it.

That was the entire charter.

I was tired of scooping the poop. I was motivated. And I wasn’t looking for an academic challenge or a wall-sized architecture diagram.

The mission fit on a Post-it note:

Detect cat → record video → save clip.

Nothing more. Nothing less.

Because of that clarity, Purrview works. Not “enterprise-scale works.” Not “press release works.”

It just works — reliably, repeatedly, and end-to-end.

“Done is better than not done.” — Michael Cizmar

Purrview is a tiny open-source Python tool that detects when a cat enters the frame and automatically records the moment.

No dashboards. No production cluster. No six-month roadmap. No “AI transformation strategy.” It doesn’t even use a “Large Language Model.”

https://github.com/michaelcizmar/purrview 
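To give a sense of how small "it just works" can be, here is a minimal sketch of that detect → record → save loop. This is not the Purrview source; it uses OpenCV's bundled cat-face Haar cascade as a stand-in detector, and the clip directory and clip length are illustrative assumptions, not necessarily what the project does.

```python
# Minimal sketch of a detect -> record -> save loop (not the actual Purrview code).
# Detector, output directory, and clip length are assumptions for illustration.
import time
from pathlib import Path

import cv2

CLIP_DIR = Path("clips")     # assumption: where clips get saved
CLIP_SECONDS = 10            # assumption: record ~10 seconds per detection
CLIP_DIR.mkdir(exist_ok=True)

# OpenCV ships a pretrained cat-face Haar cascade; it is light enough for a Pi.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalcatface.xml"
)

cap = cv2.VideoCapture(0)    # Pi camera or USB webcam at index 0
fps = cap.get(cv2.CAP_PROP_FPS) or 15.0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cats = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(cats) > 0:
        # Cat detected: record a short clip, then go back to watching.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        out_path = str(CLIP_DIR / f"cat-{stamp}.mp4")
        height, width = frame.shape[:2]
        writer = cv2.VideoWriter(
            out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
        )
        end = time.time() + CLIP_SECONDS
        while time.time() < end:
            ok, frame = cap.read()
            if not ok:
                break
            writer.write(frame)
        writer.release()

cap.release()
```

A classical cascade is a reasonable stand-in here because it runs comfortably on a Raspberry Pi with no GPU, which is exactly the on-device constraint the charter below spells out.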

Unexpectedly, it became the perfect illustration of why most AI projects fail — and what it looks like when one succeeds.


Why Purrview Worked

1. The Charter Was the Secret

The “project brief” was brutally simple:

  1. Detect when a cat enters the frame
  2. Start recording immediately
  3. Store the clip
  4. Run entirely on-device
  5. Make it work on a Raspberry Pi

Notice what’s not on that list:

  • The latest Vision Model
  • The LangChain or Ingestion Pipeline du jour
  • “future extensibility”
  • cross-functional stakeholder alignment
  • support for “millions of users”

It wasn’t a platform. It wasn’t “the future of home AI.” It wasn’t meant to change the world.

It was meant to capture the cat in the act… so to speak.

That clarity drove intensity and decisiveness.

2. There Was No POC

This part will irritate enterprise teams:

*I didn’t build a POC. I built the solution

In 20 years of selling and building search solutions, and now AI solutions, we’ve always felt that POCs are built to fail. Implementations are built to work. When you label something a POC, you subconsciously give yourself permission to:

  • Avoid finishing
  • Rescope
  • Become distracted

POC culture kills more AI projects than bad models ever will.

Purrview had no POC because I wasn’t “exploring feasibility.” Of course it was feasible — it’s a cat detector on a Pi.

The only real decision was:

Will I build it right or will I build it sloppy?

I chose the former. And that choice alone put Purrview ahead of 90% of enterprise AI initiatives.

3. No Illusions of Changing the World

This might be the most important point:

I wasn’t trying to build:

  • the next Ring camera
  • a groundbreaking AI paradigm
  • a VC-backed pet-tech startup
  • a white paper
  • a keynote demo

I just wanted something that worked.

Because I wasn’t trying to change the world, I didn’t need to:

  • boil the ocean
  • justify massive scope
  • satisfy impossible expectations
  • pretend AI is magic

The result wasn’t revolutionary. It was useful.

And that’s exactly why it shipped.

AI does not need to change the world to deliver real value. It just needs to work.

The Result: A Tiny Project With Huge Lessons

Purrview succeeded not because of the technology, but because of the discipline.

Here’s what it did right, that most enterprise AI efforts do wrong:

1. A clear, tiny charter

Not “maybe we’ll also…” Not “we could extend this to…” Just: detect → record → save.

2. Intense, focused implementation

No committees. No ceremony. Just building.

3. No POC limbo

Straight to production behavior.

4. No world-changing goals

Modest ambition → excellent execution.

5. A complete, end-to-end pipeline

Value lives in the flow, not in the model.

AI Isn’t Failing. People Are.

The lesson of Purrview is bigger than cats.

Most AI projects fail because they:

  • start with hype instead of a problem
  • lack a sharp objective
  • chase “transformation”
  • delay real implementation
  • confuse exploration with delivery

Purrview avoided all of these traps.

It’s a reminder that the right way to build AI is the right way to build anything that matters:

Define the problem. Stay ruthlessly scoped. Build with intensity. Ship.

Do that — and even a cat detector becomes a masterclass in AI project success.
