The Unsexy Truth About AI and Knowledge Management
Why the invisible infrastructure nobody wants to build is still the foundation of success.
I’ve been in knowledge management for regulated industries for over 15 years, and I keep watching the same movie with different actors.
Original (2010s): “Deploy this shiny new knowledge management system! It’ll create, maintain, and deliver all your content seamlessly!”
Remake (2020s): “Use AI! It’ll revolutionise knowledge delivery, save so much time, and make your employees’ lives better!”
Same Ending (both times): “Why are we getting outdated, contradictory or missing information? Why aren’t people using this?”
The technology changes. The hype cycle evolves. But the fundamental problem remains the same.
Organisations keep skipping the governance foundation that underpins success.
We’re being sold the dream of moving from traditional KMS or CMS platforms to AI-powered knowledge delivery without ever considering, let alone trying to solve, the foundational problem. Nobody is talking about how knowledge isn’t a static, set-and-forget, one-time thing. It’s dynamic, living, breathing and ever-evolving. We should be talking about how to ensure our content is accurate, relevant and trusted. Most just want to talk about the exciting new technology that will magically fix everything. (Cue the “Everything Is Awesome” theme tune.)
The answer to “why isn’t this working?” is generally the same regardless of the tech solution: we skipped the boring parts.
What “Fluid Knowledge” Actually Means (And Why It’s Not Magic)
There’s growing momentum around what some are calling “fluid knowledge”: the idea that enterprise information should flow seamlessly between humans and applications, always accurate, always relevant, always trustworthy.
It’s a compelling vision. Knowledge delivered directly into the employee’s existing applications. No more hunting through multiple repositories. No more outdated SharePoints. Just the right answer, in context, exactly when you need it, in your channel of choice.
For regulated industries (financial services, healthcare, utilities, government), this isn’t just convenient; it could be transformative. Imagine agent guidance that stays current with compliance requirements. Self-service that doesn’t create audit risks or increased complaints. AI assistants that don’t hallucinate or provide incorrect information.
But here’s the part everyone is skipping:
AI doesn’t fix bad knowledge management. It amplifies it.
Feed garbage data into a brilliant LLM, and you get an expensive, confident-sounding garbage dispenser. One that can now distribute incorrect guidance at scale, across every channel, faster than any human could catch and correct it.
The work hasn’t changed. It’s actually more critical than ever.
Knowledge Governance (The Same Boring Work You’ve Always Needed)
Before you had knowledge management systems, you needed to know:
- What knowledge you actually have
- Where it lives
- Whether it’s accurate/complete
- Who owns it
- When it was last validated/updated
Before you deploy AI, you need to know… exactly the same things.
The technology changed. The fundamental work did not.
You still need knowledge governance that actually functions.
Clear ownership and accountability – not “the KM team” or “someone in compliance,” but specific people responsible for specific knowledge domains who get notified when content needs review.
Content lifecycle management – creation, review, approval, publication, maintenance, retirement. Most organisations nail the creation part and then hope for the best on the rest.
Findability standards – metadata, taxonomy, classification schemas that make content findable and usable by both humans and AI systems. If humans struggle to find it, AI will simply fail faster.
Version control that’s enforced – not aspirational. What’s current? What’s archived? When was it validated? Who approved it? For regulated industries, “I think it’s current” doesn’t pass audits.
Access controls and compliance requirements – who sees what sensitive information, how you prevent leakage through AI delivery, how you maintain audit trails. Your regulatory obligations don’t disappear with AI.
AI response provenance and traceability – here’s the new challenge AI adds: when your system generates an answer by synthesising multiple sources, how do you prove to regulators exactly what was delivered? When an agent says, “the AI told me to do it,” how do you verify that? Traditional KM let you point to the exact article delivered, even down to the version. AI-generated responses? You need to capture what was synthesised, from which sources, and at what timestamp. Most vendors and organisations haven’t thought about this yet. Regulators definitely will.
The operational reality? This means comprehensive audits of all your repositories (including the ones you don’t know about), eliminating ROT (Redundant, Obsolete, Trivial) content that AI will confidently cite in answers, establishing actual workflows for content review and approval, building change management protocols for when regulations shift, and implementing logging and audit infrastructure that captures AI-generated responses with full source attribution and timestamps.
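To make the provenance requirement concrete, here is a minimal sketch of what that logging infrastructure might capture. The class and field names are illustrative assumptions, not any vendor's schema: the point is simply that each AI-generated answer is recorded with its source articles, their exact versions, and a timestamp, plus a hash so the record is tamper-evident.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class SourceRef:
    """A knowledge article the AI drew on, pinned to an exact version."""
    article_id: str
    version: str
    last_validated: str  # ISO date of the article's last governance review


@dataclass
class ProvenanceRecord:
    """One AI-generated answer, captured for audit."""
    question: str
    answer: str
    sources: list  # list of SourceRef
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Tamper-evident hash of the delivered answer and its sources."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


# Example: log what was synthesised, from which sources, and when
record = ProvenanceRecord(
    question="Can I waive the late fee for this customer?",
    answer="Yes, up to a defined limit, per the current fees policy.",
    sources=[SourceRef("FIN-203", "v4", "2024-11-02")],
)
audit_line = json.dumps({**asdict(record), "sha256": record.fingerprint()})
```

A real implementation would write `audit_line` to an append-only store, but even this skeleton answers the regulator's question: exactly which content, in which version, was delivered, and when.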
Nobody wants to fund this work. It’s not exciting. But it’s the difference between AI that transforms your operations and AI that creates audit failures you can’t explain, or worse, liability you can’t defend.
I know this sounds exactly like what you should have been doing for the past decade. That’s because it is.
AI didn’t make this work go away. It made it more urgent and the consequences more severe. When your AI serves incorrect compliance guidance to 10,000 agents simultaneously, you haven’t created efficiency. You’ve created systematic liability at a scale that manual processes could never achieve.
AI Delivery (The Exciting Part)
This is where the market energy is: delivering knowledge directly into the applications where work happens. CRM systems. Service platforms. Agent desktops. Wherever your employees actually spend their time.
The AI pitch is amazing:
- No context switching
- Reduced cognitive load
- Faster time to resolution
- Better consistency across channels
And yes, this is genuinely valuable. Asking employees to leave their workflow, open a separate knowledge base, figure out what to search for, sift through results, then return to apply what they learned, that’s been a broken design for years.
But here’s what makes headless delivery riskier than traditional KM:
In the old centralised knowledge portal (remember those?), employees could see if the content looked outdated. They could check the publish date. They could browse related articles to verify consistency. There was a UI that provided context and signals about content quality. They could even leave direct feedback or ask questions to the knowledge team.
With AI delivery, knowledge flows directly into employees’ workflow. That safety net is gone. There’s no UI to catch obvious errors before they reach end users. No visual cues that the content might be out of date. No easy way to browse and verify or leave feedback.
The quality of what you’re delivering matters exponentially more because there’s less opportunity for human judgment to intervene.
AI didn’t solve this problem. It moved it.
You moved from “employees might find bad content” to “AI will confidently provide bad content directly into employee workflows.”
For regulated environments where employees need to trust that guidance is current, compliant, and correct? This isn’t just a quality issue. It’s suddenly a risk management issue.
Closed-Loop Feedback (The Forgotten Part)
Here’s where things get really interesting (or scary): almost nobody is building proper feedback systems for their AI-powered knowledge delivery.
Traditional knowledge management had built-in feedback loops, even if they were imperfect.
Standard things like:
- Article view counts
- Search analytics showing what people looked for
- Thumbs up/down ratings
- Feedback/comments revealing gaps
- Support tickets that referenced knowledge articles
You could see patterns. You knew what was working and what wasn’t. Your users felt empowered to contribute to the health of their knowledge base.
AI knowledge delivery is breaking these signals.
Knowledge flows into third-party systems, AI tools, CRM platforms, agent assist applications, and suddenly you’re blind to:
- What content is actually being surfaced
- Whether it’s accurate in context
- Where gaps exist
- What’s becoming outdated
- Whether employees trust what they’re being told
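One way to win some of those signals back in a headless world is to emit a feedback event every time knowledge is surfaced, whatever the destination channel, and aggregate those events centrally. The function and field names below are assumptions for illustration, a sketch rather than any platform’s actual API:

```python
from collections import Counter
from datetime import datetime, timezone
from typing import Optional

# Each delivery into a third-party channel emits one event; the knowledge
# team aggregates these instead of relying on portal page views.
events = []


def record_delivery(article_id: str, channel: str,
                    helpful: Optional[bool] = None) -> None:
    """Log that an article was surfaced (and, if relayed, whether it helped)."""
    events.append({
        "article_id": article_id,
        "channel": channel,    # e.g. "crm", "agent_desktop", "chatbot"
        "helpful": helpful,    # thumbs up/down relayed from the channel
        "at": datetime.now(timezone.utc).isoformat(),
    })


def surfacing_counts() -> Counter:
    """What content is actually being surfaced, by article."""
    return Counter(e["article_id"] for e in events)


def unrated_share() -> float:
    """Share of deliveries with no feedback at all, i.e. your blind spot."""
    if not events:
        return 0.0
    return sum(e["helpful"] is None for e in events) / len(events)
```

Even this crude instrumentation restores two of the lost signals: what is being surfaced, and how much of it is flowing out with no quality feedback coming back.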
Here’s the kicker:
Most third-party AI platforms aren’t built by knowledge management or business people.
They’re built by developers who think in terms of model accuracy and system performance, not content lifecycle, knowledge quality signals, governance requirements, or how it impacts your EX and CX.
They’ll happily ingest your content and deliver it everywhere. They won’t tell you when it stops being trustworthy, because they’ve moved on to the next big thing.
This is the same problem we had with traditional KM systems, just with higher stakes. The difference is that AI can distribute bad knowledge faster, more confidently, and at greater scale than any previous technology.
The Uncomfortable Truth About AI and Knowledge Management
AI is a tool. A powerful, sexy, amazing one, but still just a tool.
It can help surface relevant content faster. It can synthesise information across sources. It can deliver knowledge in a more natural, conversational way. It can automate some curation tasks.
But AI cannot:
- Tell you if your content is actually accurate
- Know when regulations change and content needs updating
- Identify redundant or contradictory information across your silos
- Establish governance and ownership
- Create the feedback loops to maintain quality over time
All the unsexy foundational work of knowledge management? Sorry to say, it’s still needed.
Actually, more than needed, it’s now critical. Because AI will confidently amplify whatever you feed it.
Good knowledge at scale is transformative. Bad knowledge at scale is catastrophic.
The organisations succeeding with AI-powered knowledge aren’t the ones with the newest and fanciest models. They’re the ones who did the boring governance work first.
Why This Keeps Happening
I’ve watched this pattern repeat across two decades and multiple technology cycles.
The promise: New technology will revolutionise knowledge management!
The reality: Technology deployed on top of poor governance, scattered content, and broken processes.
The outcome: Expensive failures that get blamed on “change management” or “adoption”, or even the technology itself, when the real problem was skipping the foundational work.
You’re probably asking, why does it keep happening? The honest answer is that governance work is hard, unglamorous, and never really “done.” Content audits are tedious. Metadata standardisation is boring. Establishing ownership is politically messy and often impossible to enforce. Building feedback loops requires sustained operational discipline and people to maintain them.
Buying a new AI platform is easy. It’s exciting. You can name it. You can demo it. It has a launch date. Your board is ecstatic; you now have AI.
Actually fixing your knowledge? That’s a multi-year operational commitment with no exciting press release.
So organisations keep choosing the shiny new thing over the foundational work. And then wondering why it didn’t deliver on its promises.
I say let’s shift the narrative:
Governance is not the barrier to AI adoption. It’s the prerequisite to AI success!
What’s your never-ending story experience? Are you seeing the same patterns repeat with AI that we saw with earlier technology waves?
