Case Study - Knowledge Work As Code

Building a multi-agent AI system that systematizes knowledge work. From scattered information to queryable patterns across 39 relationships and 27 active projects.

Organization
Personal/Internal Work
Year
Focus Areas
AI Systems, Data Engineering, Workflow Automation

The Situation

Twenty-seven active projects. Thirty-nine client relationships. Information scattered across meeting transcripts, CRM fields, my brain, hastily written notes, email chains, Slack threads.

Every morning started the same way: twenty minutes reconstructing context before a call or a meeting. What did we discuss last time? What stage is this project in? What are their technical requirements? Who's the key stakeholder again? What questions did they ask that I never answered?

The information existed. Somewhere. Finding it was the problem.

This is the reality of white-collar knowledge work. You're managing relationships, tracking projects, maintaining long-term context across dozens of active engagements. You need to know everything about everyone, all the time, while also staying current on your domain, competitive landscape, and emerging patterns in your work.

Most knowledge workers solve this with tribal knowledge and hope. Some use CRM religiously. Others build elaborate note-taking systems. All of us context-switch until our brains hurt.

The fundamental issue isn't lack of information. It's that knowledge work isn't systematized.

Smart person does smart thing. Over and over. That's the model. And it doesn't scale. Not to 27 active projects. Not to a team. Not to the compounding complexity of relationship-driven work.

What if knowledge work could be code?

The Concept: Knowledge Work As Code

Here's what I mean by "knowledge work as code":

Traditional knowledge work is stored in human brains, supplemented by notes, documents, and spreadsheets. When you need to know something, you remember it, search for it, or ask someone who knows. The knowledge is implicit, the process invisible, the patterns undocumented.

Knowledge work as code means:

  • The database is the source of truth, not memory
  • Workflows are documented and constantly evolving, not static tribal knowledge
  • Patterns are queryable, not anecdotal
  • The system gets smarter over time, and the human becomes far more accurate and impactful

This isn't about replacing the human. It's about encoding decision-making patterns so you focus on high-value work only humans can do. Building infrastructure for knowledge work the same way you'd build infrastructure for any other critical function.

Most people think AI is the solution. It's not. AI is the interface. The solution is the system underneath.

The System Architecture

I built this in five layers. Each layer serves a specific purpose. Together they create a system that turns chaos into capability.

Layer 1: Database As Truth

DuckDB as the foundation. Not a CRM. Not a notes app. A real database.

Why DuckDB? It's local, fast, designed for analytical queries. I can version-control the schema. SQL is the right interface for structured knowledge work. And vector similarity search is available through its vss extension.
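The tables the queries in this piece rely on boil down to three entities. Here's a minimal sketch of that schema, using Python's built-in sqlite3 purely to keep the example self-contained (the real system uses DuckDB); table and column names are taken from the queries shown in this case study:

```python
import sqlite3

# Self-contained sketch of the three core tables. sqlite3 stands in for
# DuckDB here; the column names mirror the SQL queries in this piece.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE relationships (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    context TEXT,
    strategic_goals TEXT
);
CREATE TABLE projects (
    id INTEGER PRIMARY KEY,
    relationship_id INTEGER REFERENCES relationships(id),
    stage TEXT  -- e.g. 'discovery', 'active', 'completed'
);
CREATE TABLE interactions (
    id INTEGER PRIMARY KEY,
    relationship_id INTEGER REFERENCES relationships(id),
    interaction_date TEXT,
    questions_asked TEXT
);
""")
```

Everything downstream, from prep queries to agent tools, reads and writes these three tables.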

Here's what I mean by truth. Before a Monday morning call with a client, I run this query:

SELECT r.name, r.context, r.strategic_goals,
       COUNT(DISTINCT p.id) AS active_projects,
       COUNT(DISTINCT i.id) AS total_interactions,
       MAX(i.interaction_date) AS last_contact,
       list(DISTINCT {'question': i.questions_asked,
                      'date': i.interaction_date}) AS recent_questions
FROM relationships r
LEFT JOIN projects p ON r.id = p.relationship_id
LEFT JOIN interactions i ON r.id = i.relationship_id
WHERE r.name = 'Acme Corp'
  AND p.stage != 'completed'
GROUP BY ALL;

Twenty seconds. Complete context. Every conversation we've had. Every question they asked. Every project in flight. Not scattered across notes and memory. Right there. Queryable. Accurate.

That's what I mean by truth. The database knows everything. I just ask it questions.

Layer 2: Integration Points

The database is only as good as what flows into it. Three integration patterns keep it fed.

Every client interaction gets transcribed. I built a pipeline that extracts key points, questions asked, concerns raised, next steps committed. Structured data from unstructured conversations flows directly into the database. Now searchable, queryable, analyzable. The meeting happens, the transcript processes, the database updates. No manual logging.
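A toy stand-in for that extraction step, to show the shape of the output. The real pipeline uses an LLM; here a regex grabs questions and a hypothetical "Next step:" convention grabs commitments, and the function name and record shape are illustrative, not the production ones:

```python
import re

def extract_structured(transcript: str) -> dict:
    """Toy stand-in for the transcript pipeline. The real system uses an
    LLM; this sketch pulls questions via punctuation and next steps via a
    hypothetical 'Next step:' convention."""
    questions = re.findall(r"[^.?!]*\?", transcript)
    next_steps = re.findall(r"(?i)next step:\s*(.+)", transcript)
    return {
        "questions_asked": [q.strip() for q in questions],
        "next_steps": [s.strip() for s in next_steps],
    }

record = extract_structured(
    "Can you support SSO? We also need audit logs. "
    "Next step: send the security whitepaper."
)
```

Whatever does the extraction, the point is the same: the output is a structured record that inserts cleanly into the interactions table.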

That feeds the dashboard layer. A clean interface for the messy reality. Project health, upcoming meetings, engagement status, action items. The AI agents write the reports. I read them over coffee.

And when a client asks about something specific, vector search over the knowledge base finds the exact context in seconds. I indexed relevant documentation with embeddings. No more searching through folders and hoping I named the file something findable.
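The retrieval step is just nearest-neighbor search over those embeddings. A minimal sketch, with made-up document names, tiny three-dimensional vectors in place of real embeddings, and plain cosine similarity standing in for the database's vector index:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical pre-computed embeddings; the real system stores these
# alongside the documents in the database.
docs = {
    "soc2-overview.md": [0.9, 0.1, 0.0],
    "pricing-faq.md": [0.1, 0.8, 0.3],
}

def search(query_vec, top_k=1):
    # Rank documents by similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:top_k]
```

Embed the question, rank the corpus, return the top hits: that's the whole mechanism behind "finds the exact context in seconds."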

Integration isn't about connecting everything. It's about making the right information available at the right time in the right format. These three patterns handle that.

Layer 3: Workflow Documentation

Here's the invisible problem: knowledge work is mostly undocumented workflow.

You know how to prep for a client meeting. You know how to analyze relationship health. You know how to track project status across your portfolio. But that knowledge is implicit. You can't hand it to someone else. You can't systematize it. You can't make it machine-readable.

Take relationship health analysis. I used to just know when something felt off. A client going quiet. Responses getting shorter. Meetings rescheduling. That intuition came from pattern recognition built over years. But I couldn't explain the patterns. I just felt them.

Now it's documented. What indicates a relationship is thriving versus at risk. Early warning signs. When to intervene. Days since last contact, response time patterns, engagement depth trends, question complexity over time. The scoring patterns that matter. All written down as a machine-readable workflow.

Or portfolio health reporting. The categorization system. Urgent, needs attention, on track. How to triage across dozens of relationships. Where to focus energy. What signals trigger which categories. The logic I used to run in my head every Monday morning, now executable by the system.
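Once those signals are written down, the triage itself is a small function. A sketch of that logic with illustrative thresholds (the cutoffs below are placeholders, not the production values):

```python
from datetime import date

def triage(last_contact: date, avg_response_hours: float, today: date) -> str:
    """Categorize a relationship as urgent / needs attention / on track.
    Thresholds are illustrative placeholders, not the real scoring rules."""
    days_quiet = (today - last_contact).days
    if days_quiet > 30 or avg_response_hours > 72:
        return "urgent"
    if days_quiet > 14 or avg_response_hours > 24:
        return "needs attention"
    return "on track"
```

Run that over every active relationship and you have the Monday-morning portfolio report.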

I documented six core workflows like these: creating engagement records, updating after interactions, analyzing relationship health, reporting portfolio status, generating prep materials, and searching the knowledge base. Each one captures the implicit patterns of how I actually work.

These aren't instruction manuals. They're registered MCP resources. Machine-readable workflows that AI agents can discover, understand, and execute.
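The shape of that registry is simple: a URI per workflow, mapped to a machine-readable description an agent can discover and read. A stdlib-only sketch that mimics the resource URIs named later in this piece (the descriptions here are paraphrases, not the actual workflow documents):

```python
# Sketch of a workflow registry keyed by resource URI. The real system
# registers these as MCP resources; the descriptions are paraphrases.
WORKFLOWS = {
    "workflow://relationship-health-analysis": (
        "Score each relationship on days since last contact, response-time "
        "trend, and engagement depth; flag anything past threshold."
    ),
    "workflow://portfolio-health-reporting": (
        "Triage every active relationship into urgent / needs attention / "
        "on track and order the report by severity."
    ),
}

def discover() -> list[str]:
    # An agent lists available workflows before choosing one.
    return sorted(WORKFLOWS)

def read(uri: str) -> str:
    # An agent reads a workflow to learn how to execute it.
    return WORKFLOWS[uri]
```

Discovery first, then execution: the agent never needs the workflow hardcoded into its prompt.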

That's the shift. From "here's how I do my job" to "here's a system that knows how to do the job."

Layer 4: MCP Tools - The API Layer

MCP (Model Context Protocol) is Anthropic's standard for connecting AI agents to tools and data.

Most people build MCP servers with prescriptive CRUD operations:

  • get_relationship()
  • update_project_stage()
  • log_interaction()
  • find_projects()

Thirty tools, each doing one specific thing. That was my first version. 432 lines of code. And it was wrong.

The problem: you're constraining the AI to your imagination. You're saying "here are the 30 things I think you might need to do." But you don't know what questions you'll ask next week. You don't know what patterns will emerge. You're building a straitjacket.

I deleted it. All of it.

New approach: expose SQL directly.

Three tools:

  • execute_sql() - Run any SQL query
  • get_table_schema() - Explore the database structure
  • search_documentation() - Vector search over workflows

That's it. From 432 lines to 117. From 30 constrained operations to infinite flexibility.

The AI agent doesn't call get_relationship(). It writes SQL:

SELECT r.name, r.context, r.strategic_goals,
       COUNT(i.id) as total_interactions,
       MAX(i.interaction_date) as last_contact
FROM relationships r
LEFT JOIN interactions i ON r.id = i.relationship_id
WHERE r.name LIKE '%Company%'
GROUP BY ALL;

I'm letting the LLM do what it's great at: writing code. But here's the critical piece: the MCP server acts as a secure boundary layer. Agents never touch credentials, never see internal architecture, never get direct database access. They write queries against a well-documented system while the server handles all the sensitive bits. This is how you give AI powerful, flexible access to your data without giving away the keys to the kingdom. Direct SQL access beats rigid CRUD operations, and the agents get better over time as the schema evolves. All while your security perimeter stays locked down.
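A minimal sketch of that boundary, using sqlite3 in place of DuckDB and a deliberately naive read-only guard (a production server would do real statement parsing and use a read-only connection, not a string-prefix check):

```python
import sqlite3

# The server owns the connection; agents only ever submit SQL text.
# sqlite3 stands in for DuckDB to keep the sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relationships (id INTEGER, name TEXT)")
conn.execute("INSERT INTO relationships VALUES (1, 'Acme Corp')")

def execute_sql(query: str):
    """Boundary-layer tool: run agent-supplied SQL, reads only.
    Naive guard for illustration; real enforcement should parse the
    statement or open the database read-only."""
    if not query.lstrip().lower().startswith(("select", "with")):
        raise PermissionError("read-only: only SELECT/WITH queries allowed")
    return conn.execute(query).fetchall()
```

The agent gets the full expressiveness of SQL; the server decides what a query is allowed to touch.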

The workflows from Layer 3 become discoverable here. When an agent connects to the MCP server, it sees registered resources: workflow://creating-engagement-records, workflow://relationship-health-analysis, workflow://portfolio-health-reporting. The agent can read them, understand them, execute them. No hardcoded instructions. No brittle "if this, then that" logic. Just discoverable, machine-readable workflows.

This is the key architectural decision: Don't limit the AI to your predefined operations. Give it the primitives and let it compose solutions.

Layer 5: Agent System

Not one AI. Multiple specialized agents, orchestrated by a conductor.

Three agents, each with a specific domain:

Customer Agent: Knows everything about relationship engagement. Writes SQL against the database. Analyzes patterns. Generates portfolio reports. Fetches context before meetings.

Writer Agent: Handles all communication. Emails, proposals, follow-ups, documentation. Writes in my voice because it's been trained on my style. No corporate buzzwords. Just clear, direct communication.

Engineer Agent: Technical review, code feedback, architecture decisions. The brutal honesty filter before something ships.

Why multiple agents? Because general-purpose AI is like a general-purpose employee. Jack of all trades, master of none. Specialization matters.

The pattern: Customer Agent fetches context → Writer Agent drafts email → Engineer Agent reviews technical claims → Final output to user.

Sequential handoffs for complex workflows. Parallel execution for independent tasks. The Conductor decides.
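The sequential handoff above can be sketched in a few lines. The agent bodies here are stubs (the real agents are LLM calls with their own tools); the point is the orchestration shape:

```python
def customer_agent(task: dict) -> dict:
    # Stub: the real agent queries the database for full context.
    return {"client": task["client"], "open_questions": ["SSO timeline?"]}

def writer_agent(context: dict) -> str:
    # Stub: the real agent drafts communication in the author's voice.
    return f"Hi {context['client']}, following up on: {context['open_questions'][0]}"

def engineer_agent(draft: str) -> str:
    # Stub: the real agent reviews technical claims before anything ships.
    return draft

def conductor(task: dict) -> str:
    """Sequential handoff: customer → writer → engineer → final output."""
    context = customer_agent(task)
    draft = writer_agent(context)
    return engineer_agent(draft)
```

Swap the stubs for real LLM calls and the conductor stays the same: it only decides who runs, in what order, on whose output.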

That's the compound effect. The system gets smarter as you document more. The AI improves as patterns emerge. Knowledge accumulates systematically, not just in your head.

The Compound Effect

The real value isn't any single feature. It's how the pieces compound over time.

Monday morning. "Generate portfolio health report." Instant categorization: urgent (3), needs attention (7), on track (17). I know exactly where to focus. No manual review. No guessing at priorities.

Before a client meeting. "Prep me for the Acme meeting." Two minutes. Complete history, open questions, context, stakeholder map. Everything I need, nothing I don't.

After a client meeting. "Analyze this transcript and update the database." Structured capture. Nothing lost. No hastily written notes that I'll never find again.

Context switching between projects. "What did we discuss about X?" Immediate answer. No ten minutes trying to remember where I left off.

90% reduction in prep time. 100% capture of engagement history.

And it compounds. Every interaction feeds the system. Every question asked gets logged. Every concern raised captured. Every solution proposed documented.

Six months from now, I'll query: "What questions do enterprise clients ask during security discussions?" or "What patterns differentiate projects that complete quickly versus those that stall?" The database becomes institutional memory. Patterns emerge from data, not anecdotal evidence.

This is the blueprint for onboarding the next person. They get: complete relationship database with full history, documented workflows for every core activity, AI agents that know how to execute those workflows, patterns and insights from every engagement before them. Instead of six months of tribal knowledge transfer, they're productive in days.

That's knowledge work as code. The human gets better, but the system also gets better. And the system transfers.


The Bigger Picture

We're at the beginning of a shift. Knowledge workers can now build systems that encode their expertise. Not to replace themselves, but to amplify themselves.

This example is relationship-focused work. But tomorrow?

  • Consulting as code - Document methodology, ship systems to clients
  • Research as code - Literature review becomes queryable knowledge base
  • Project management as code - Workflows that adapt to team patterns
  • Legal work as code - Case research, document analysis, precedent search
  • Medical records as code - Patient history, treatment patterns, diagnostic support

The question isn't whether AI will change knowledge work. It's whether you'll systematize your knowledge work before someone else does.

Start small. Build your database. Document one workflow. Add one agent. Measure what changes.

The system compounds from there.


This case study documents personal/internal work - a system built to systematize knowledge work for managing client relationships and active projects. This is not affiliated with or representative of any specific company. All technical details are accurate as of November 2025.
