On Institutional Knowledge and Metabolism
In my first post, I suggested that AI acceleration is pushing SaaS organizations into a kind of structural molting. Roles blur. Boundaries soften. Execution compresses. Institutions shed shells that no longer fit the pace of capability now at the fingertips of their talent.
But structure is only one part of the story.
Beneath structure there are forces that are more difficult to characterize and quantify: memory, intuition, and what many organizations refer to as institutional knowledge (or “tribal knowledge”). These are the components of what often shows up as “good business sense” in practice.
If organizations are going to operate under continuous technical acceleration, the question is not only how they reorganize. It is how they remember, persist intuition, and apply tribal knowledge. Without mechanisms to continuously accrete these types of knowledge into LLM context or training, talent using AI tools increasingly operates in knowledge silos - making discrete decisions, or delegating them to agents, in ways that, in the aggregate, may erode the historical advantages conferred by institutionally informed judgment.
When architected or applied poorly, AI can fundamentally erode institutional knowledge as reliance on agentic intermediaries increases.
However, when architected and applied right, AI holds the promise of significantly optimizing the management, preservation, and diffusion of organizational knowledge. Increasingly sophisticated agentic memory management and context injection tools can help ensure that knowledge doesn’t remain siloed, even when applied in more autonomous settings. I think AI will also play a major role in both knowledge discovery and continuous characterization - or surfacing knowledge that appears to be emerging from patterns of interaction or large data sets that historically may have gone unnoticed or required significant investment in post-hoc data science analysis.
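As a rough sketch of what context injection might look like in practice - every name here (`KnowledgeNote`, `inject_context`) is hypothetical, not a reference to any particular tool - the idea is simply to rank captured institutional notes against an incoming question and prepend the best matches to the model's prompt:

```python
from dataclasses import dataclass


@dataclass
class KnowledgeNote:
    """One captured fragment of institutional knowledge."""
    topic: str
    text: str


def inject_context(question: str, notes: list[KnowledgeNote], limit: int = 2) -> str:
    """Rank notes by naive keyword overlap with the question and
    prepend the best matches to the prompt sent to a model."""
    q_words = set(question.lower().split())
    ranked = sorted(
        notes,
        key=lambda n: len(q_words & set(n.text.lower().split())),
        reverse=True,
    )
    context = "\n".join(f"- {n.text}" for n in ranked[:limit])
    return f"Institutional context:\n{context}\n\nQuestion: {question}"
```

A production version would presumably use embeddings or an agentic memory tool rather than word overlap; the point is only the shape of the loop: retrieve what the organization knows, inject it, then ask.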
Importantly, as with my previous note on molting, the key will be building, analyzing, and continuously refining the structures through which organizational knowledge is captured and diffused via AI. These structures must emerge from everyday work and the systems we use, not from something that feels clunky or artificially imposed.
This manifests in important ways - organizations have already shifted toward greater reliance on AI for decision-making in nearly every area where people make decisions. As I mentioned in my previous post, this shift was largely organic - not mandated. On-demand super-intelligence simply became easier to access and involved less friction. Anecdotally, this pattern appears to have accelerated as reasoning-focused models improved output quality and reduced iterative friction.
In practice, this inserts a new intermediary, AI, between traditional mechanisms of shared intuition and decision-making. Conversations, meetings, and Slack threads have historically routed experience directly between people and teams. Increasingly, that routing passes first, or entirely, through an AI model. At the operator level, it is simply more efficient, and that efficiency is part of the risk.
So one of the questions becomes:
How does historical and evolving institutional knowledge disperse into new forms of decision-making within new forms of organization?
To see why this matters, it helps to look at examples of where institutional memory actually lives. I’ll provide examples in a context that I’m familiar with: Food and Beverage manufacturing.
Procurement
In the context of procurement, institutional knowledge has long resided in personal networks. A seasoned buyer’s “rolodex” was never just a list of suppliers. It encoded judgment. Who delivers under pressure. Who flexes on minimum order quantities. Who quietly substitutes spec-adjacent materials. Who passes audits but struggles when timelines tighten. Much of this never entered a formal system. It lived in accumulated experience.
At TraceGains, we built platforms like Gather Marketplace to try to formalize elements of the “rolodex”. Supplier documentation practices, performance signals, network visibility, captured conversations, shared workspaces for collaborative analysis - all of these reduced reliance on purely personal recall.
These tools were, in part, mechanisms to support institutional knowledge gain and retention, but were never meant to replace it. Institutional knowledge emerges from accumulated personal and team experience interacting with evolving market dynamics. Traditional SaaS platforms can only indirectly capture and surface fragments of this.
Procurement offers one lens, but formulation reveals the same dynamic.
Formulation
In food science, institutional memory often takes the form of knowledge derived from long histories of trial and error. A stabilizer that failed under shear in 2017. A protein substitution that technically worked but altered mouthfeel enough to trigger consumer complaints. A reformulation rejected not for technical reasons, but because it conflicted with a customer’s unspoken preference.
The spreadsheet records the outcome. The scientist remembers the rationale. That memory is critical to reducing rework and costly R&D errors.
Even when documentation exists, the layered reasoning behind acceptance or rejection is rarely captured in full. The “why” remains distributed across people, email threads, and informal discussions - in food science, often literal paper notebooks.
Quality and safety
Quality leaders carry what might be called institutional texture. The auditor who fixates on supplier verification. The near miss that almost became a recall. The supplier whose documentation is technically compliant but consistently late. These experiences shape escalation thresholds and risk posture long after the event is archived.
Such knowledge is not easily reducible to fields in a database.
Historically, organizations have distributed institutional knowledge through people rather than silicon systems. Some information is encoded in databases, documentation, or conversations. But the deeper layers - the second- and third-order interpretations of what those facts mean, when they matter, and how they interact - tend to accumulate organically through lived experience. Even where formal data science insights exist, they typically inform institutional judgment rather than replace it. The interpretation, prioritization, and contextual weighting of information still emerge through human interaction over time.
There is a concept in organizational psychology known as transactive memory (one I encountered while researching this post). Very briefly, transactive memory holds that teams do not store all knowledge within each individual. Instead, they maintain a shared understanding of who knows what. One person understands allergen regulation. Another remembers the failed co-man trial from five years ago. Another knows which supplier consistently underestimates lead times.
The strength of the organization lies not only in expertise, but in the reliability of this cognitive routing system.
AI complicates this arrangement by partially or entirely disintermediating traditional cognitive routing systems.
As I posited in my previous blog post, this disintermediation is happening whether organizations want to acknowledge it or not. So, accepting that, the question becomes how to develop cognitive routing systems that include AI.
When models can retrieve, summarize, and synthesize institutional data, they begin to occupy space within that transactive memory network. The routing question subtly shifts. It is no longer only “Who knows this?” It becomes “Does the system know this?” And more importantly, “When should we trust it?”
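A toy sketch of that shifted routing question, with all names and thresholds hypothetical: a query goes to the AI layer only when the system's confidence on a topic clears a trust threshold; otherwise it routes to the human who “knows what”:

```python
def route_query(topic: str,
                system_confidence: dict[str, float],
                experts: dict[str, str],
                threshold: float = 0.8) -> str:
    """Transactive-memory routing with an AI layer in the loop:
    answer via the model only when its confidence on this topic
    clears the trust threshold; otherwise route to the person
    (or, failing that, the team) who holds the expertise."""
    if system_confidence.get(topic, 0.0) >= threshold:
        return "ai"
    return experts.get(topic, "escalate-to-team")
```

The interesting design work lives in where the threshold comes from and who maintains the expert map - exactly the kind of routing decision the transactive-memory framing makes visible.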
To understand how AI affects organizational metabolism, we need to be more precise about what organizational memory actually is and where it resides.
Organizational memory is not a single thing. It has structure.
Some memory resides within individuals. This is experiential knowledge — the buyer who senses when a supplier’s tone signals risk, the scientist who remembers how a stabilizer behaved at scale, the quality lead who recalls the internal debate that nearly escalated into a recall. This form of memory is embodied and interpretive. It rarely appears in full documentation.
Some memory resides in systems. Databases, specification repositories, audit archives, supplier documentation portals. This is structured, queryable memory. It persists beyond tenure. It scales. It reflects what the organization chose to formalize.
Some memory resides in networks. The relationships between teams, suppliers, customers, and regulators. These networks encode knowledge that no single document contains. Knowing who to escalate to. Who can unblock a stalled project. Who interprets regulatory nuance conservatively.
And then there is a more diffuse layer: pattern memory. Observations that have not yet been synthesized into explicit policy. A sense that complaints spike after certain formulation adjustments. A quiet awareness that a supplier category consistently introduces documentation friction. These are emerging signals embedded in data and experience.
AI interacts differently with each layer.
It can retrieve structured system memory with precision. It can surface latent statistical patterns from data. It can approximate relational networks if sufficiently mapped. But embodied experiential knowledge remains difficult to capture unless it is deliberately externalized.
Once we recognize that organizational memory has structure, we can also see that it has failure modes.
Memory decays.
When knowledge resides primarily within individuals, it exits with them. Tenure becomes a proxy for continuity. Attrition becomes structural amnesia. The organization retains artifacts, but loses human or team interpretations of the meaning and significance of those artifacts.
Database memory persists longer. Specifications and audit records outlive people. But persistence is not coherence. Repositories accumulate artifacts without hierarchy. Information remains, but meaning fragments. This is especially true in SaaS contexts where high degrees of configurability complicate the software’s ability to derive meaning across customer sites and contexts.
There is also distortion.
AI systems summarize. They compress. They may privilege patterns that are statistically “loud” over those that are strategically consequential. A near miss that shaped internal behavior may carry less weight in the data than a routine compliance cycle repeated dozens of times. What was emotionally formative for a team may be numerically insignificant for a model.
Summarization introduces bias not through malice, but through abstraction. What is numerically dominant is not always strategically decisive.
And then there is centralization.
Organizations often respond to knowledge challenges by attempting consolidation (e.g., a single system of record, a unified repository, a master knowledge graph). Centralization promises clarity, but, as many can attest, these systems can be exceedingly difficult to maintain over time. Further, distributed memory has advantages. When expertise is spread across people and teams, it introduces friction, but also redundancy and resilience.
In AI-augmented environments, this balance becomes architectural rather than accidental. Total centralization risks brittleness. Total distribution risks inconsistency. Most organizations operate somewhere between the two, often without recognizing that this equilibrium determines their adaptive capacity.
I think these failure modes matter more during periods of acceleration like the one we are experiencing now with AI.
In a slow-moving organization, rediscovering lessons is inefficient but survivable. Mistakes repeat at tolerable intervals. Institutional memory can be informal and still functional.
In a high-acceleration environment, poor memory compounds.
As AI compresses production cycles and lowers the cost of experimentation, iteration increases. Workflows recombine. Decisions happen faster. Under these conditions, institutional memory ceases to be archival. It becomes metabolic. Organizations do not metabolize change through structure alone. They metabolize it through memory: what they retain, reinterpret, and carry forward.
If structural molting changes the outer shell of an organization, institutional memory determines what survives the shed. Weak memory turns change into reset. Strong memory makes change cumulative.
AI does not eliminate the need for expertise. It redistributes it. It becomes part of the memory architecture itself — influencing what is recalled, how it is summarized, and which patterns are surfaced.
The leadership question, then, is not whether AI should participate in knowledge workflows. It already does (for better or worse). The question is how memory is designed for agentic application in sustainable ways that allow for continuous evolution based on learned experiences.
What knowledge should be retained for context and training?
How can that knowledge be captured, interpreted, and/or codified?
What should remain distributed?
Where must humans validate interpretation?
How should AI systems be embedded into decision loops?
These are not tooling decisions. They are institutional ones.
If organizational performance once depended, in part, on knowing who to ask, it may now depend on how well routing decisions are designed across humans and systems. In highly automated or co-piloted environments, the system itself increasingly determines when to leverage embedded institutional knowledge and when to surface or escalate to human networks. The challenge is ensuring that when human judgment is engaged, the resulting insight is incorporated back into institutional memory rather than remaining isolated.
That is not simply a retrieval problem.
It is a design problem. And under acceleration, it becomes decisive.
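A minimal sketch of that design, assuming a hypothetical `InstitutionalMemory` store (none of these names refer to a real system): when memory lacks an answer, human judgment is engaged, and the resulting insight is written back rather than left isolated:

```python
class InstitutionalMemory:
    """Hypothetical store for insights produced by human judgment."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def recall(self, topic: str):
        """Return the stored insight for a topic, or None if unknown."""
        return self._store.get(topic)

    def record(self, topic: str, insight: str) -> None:
        self._store[topic] = insight


def answer_with_writeback(topic: str, memory: InstitutionalMemory, ask_human) -> str:
    """Engage human judgment only when memory is empty for the topic,
    then incorporate the insight back into institutional memory."""
    known = memory.recall(topic)
    if known is not None:
        return known
    insight = ask_human(topic)      # human judgment engaged
    memory.record(topic, insight)   # written back, not left isolated
    return insight
```

The second time the same question arises, the organization answers from memory instead of re-consuming expert attention - the write-back is what makes the loop cumulative rather than extractive.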
Poorly designed AI layers compress decision cycles but hollow out memory.
Well-designed AI layers compound institutional memory and increase signal quality.
The organizations that win will not be those that automate the fastest, but those that remember best while they accelerate.