Why We Wrote an Open Source AI Framework For the Industry

John Harden • March 25, 2026



One of the reasons AI feels so messy in the MSP world is simple. There isn’t a real framework. 

Not a shared one. 

Not a practical one. 

Not something people can actually ground decisions in. 

 

What exists instead is a mix of vendor narratives, half‑borrowed security models, and a lot of well‑intentioned guesswork. Everyone is trying to build structure at the same time they’re trying to figure out what AI even is in their business. 

 

That’s a hard way to operate. 

 

Most of what I see today isn’t really a framework. It’s paperwork layered on top of uncertainty. An attempt to look organized before there’s anything stable underneath it. 

 

And that’s not a knock on effort. It’s just what happens when there’s nothing solid to anchor to. 

This is why I ended up writing the Lemhi AI framework in the first place.

Not because I wanted to introduce another abstraction, but because there wasn’t one to start from. There was no common language. No baseline for what “good” even looked like. No way to evaluate tools without starting from scratch every time. 

 

Everyone was picking tools first and trying to justify them later. That’s backwards. 

Without a framework, every AI decision feels heavyweight. Every new tool creates debate. Every customer conversation turns into a custom explanation. And every internal discussion becomes philosophical instead of practical. 

 

A real framework does the opposite. 

It gives you a place to stand. 

It makes tradeoffs obvious. 

It lets you evaluate tools against something instead of reacting to them emotionally or defensively. 

Once we accepted that a framework was needed, the next decision was obvious. It had to be open.


If this lived behind a product, a paywall, or a consulting engagement, it would immediately lose credibility. It would feel like positioning instead of structure. Another opinionated take instead of a shared starting point. 


That was the opposite of the goal. 


The intent here is not to “win” the AI framework debate. It is to start it and open it to the community. 


Open source forces discipline. Anyone can inspect it. Anyone can challenge it. Anyone can fork it. If something does not hold up in the real world, it gets exposed quickly. That is a feature, not a risk. 


It also keeps the framework honest. The moment it turns into a sales asset, it stops being useful as a control system. MSPs already have enough vendor-shaped narratives telling them how AI should work. They do not need another one. 


So we gave it to the community while committing to steward its ongoing changes. My take on it? You do not need to believe everything in it. You just need a place to stand. 


If you have spent time in cybersecurity, the structure will feel familiar. That is intentional. 


CIS works not because it is perfect, but because it respects how organizations actually adopt things. It recognizes that maturity is staged. That not every control matters on day one. That sequencing matters more than ambition. AI adoption follows the same pattern. 


There is a massive difference between “we are experimenting” and “this is now part of how work gets done.” Treating those two states the same is how organizations either freeze or move too fast. 


So instead of inventing something new, we copied the part that already worked. 


What can you expect? 


Implementation Groups. 


IG1: Baseline – What must exist before AI is considered real 

IG2: Scale – What prevents drift as adoption grows 

IG3: Advanced – What only matters once AI is embedded into sensitive workflows 

This is not about slowing teams down. It is about giving them permission to start honestly where they are. 
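The staged structure above can be sketched in a few lines of code. To be clear, the control names below are hypothetical examples I made up for illustration, not actual Lemhi controls; the sketch just shows how Implementation Groups let a team scope to where it actually is today.

```python
# Illustrative sketch only: control names are invented, not from the framework.
IG_NAMES = {1: "Baseline", 2: "Scale", 3: "Advanced"}

# Each control is paired with the Implementation Group where it first applies.
controls = [
    ("Name an accountable AI owner", 1),
    ("Define data boundaries for AI tools", 2),
    ("Audit AI outputs in sensitive workflows", 3),
]

def in_scope(controls, current_ig):
    """Only controls at or below the current maturity stage apply right now."""
    return [name for name, ig in controls if ig <= current_ig]

print(in_scope(controls, 1))  # an IG1 team starts with baseline controls only
```

The point of the scoping rule is permission: a team at IG1 is not failing the IG3 controls, it simply is not there yet.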


Pillars 


Pillars are not just organizational buckets. Each one maps to a failure mode we kept seeing in real environments. 


Most AI problems are predictable. Missing ownership. Unclear data boundaries. No visibility. No rollback path. Pillars force teams to confront the parts they usually assume away. 


What Each Pillar Represents 


Each pillar answers a different “what breaks if we ignore this” question: 


Strategy & Buy‑In – Who owns AI and why it exists 

Policy & Governance – What is allowed, what is not, and how exceptions work 

Technical Readiness – Whether the environment can actually support AI 

Process Mapping – Where AI fits into real work, not demos 

Data Security & Tagging – What data AI can see and what it never should 

AI Observability – Whether usage, cost, risk, and quality are visible 

Copilot Readiness – How Microsoft Copilot expands safely and deliberately 

AI Tooling & Deployment – How pilots become production without chaos 

Skipping a pillar usually shows up later as noise, risk, or rework. 


What’s Inside Each Control 


Every control is written to be executable, not theoretical. 


Each one includes: 


A clear objective 

A concrete requirement 

A defined cadence 

A named owner 

Evidence you can actually produce 

Controls are not pass or fail judgments. They are orientation points. They tell you what matters now, what can wait, and what you should not skip. 
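The five fields above can be sketched as a simple record. The field values here are invented examples, not actual Lemhi controls; the sketch only shows the shape each control takes.

```python
# Hypothetical sketch of a control record; the example values are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    objective: str    # why the control exists
    requirement: str  # the concrete thing that must be true
    cadence: str      # how often it is reviewed
    owner: str        # the named role accountable for it
    evidence: str     # the artifact you can actually produce

example = Control(
    objective="Keep sensitive customer data out of AI tools",
    requirement="All customer data stores are tagged before AI access is granted",
    cadence="Quarterly review",
    owner="Service Delivery Manager",
    evidence="Export of data tags plus the AI tool access list",
)

# Reviewing a control as an orientation point, not a pass/fail gate, means
# asking whether the evidence exists and is current for this cadence.
print(example.owner)
```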


The point of the framework is simple. 


AI should feel boring once it is working. Owned. Governed. Measured. Improved over time. 


If it does not, something upstream is missing. 
