Field Notes.

The Business Is Not in the Prompt

Why enterprise AI needs a shared operational model

Roger Rodriguez

A couple of weeks ago I watched a normal operational conversation turn into a group project in browser tabs.

Slack was moving. Support tickets were open. Someone had a dashboard up. Someone else was checking deployment history. A customer team was trying to understand impact.

The AI assistant in the middle of it sounded confident.

But the humans were still doing the expensive part: stitching together context from five systems and three people's memories before anyone could make a real decision.

That was the part I could not stop noticing: the AI had an answer, but the business context still lived everywhere else.

If you have spent enough time inside a growing company, this feels painfully familiar. The company is not in one place. It is spread across systems, Slack threads, tickets, permissions, meetings, and the brains of people who know which buttons are load-bearing.

Humans are surprisingly good at this. We carry context from a 9:00 AM escalation meeting into the support queue, from the support queue into Slack, from Slack into a dashboard, and from the dashboard into a decision nobody writes down because the next meeting already started.

Then AI showed up, and for a minute it felt like the software interface had changed overnight.

You could ask questions in plain English. Summarize long threads. Generate workflows. Analyze data. Automate tasks that used to require three teams and a calendar invite with the word "sync" in it.

It was legitimately impressive.

It still is.

But the longer I spend with these systems, the more I think the problem is not that AI assistants are bad. The problem is that we keep trying to fix an operating-system problem with chat boxes.

The agent can sound fluent while the business context underneath it is still brittle.

SaaS Fragmented the Map

This is not an indictment of SaaS. The SaaS era solved a huge set of problems.

We moved from spreadsheets, email chains, and manual workflow archaeology into specialized systems for sales, support, marketing, finance, engineering, payments, analytics, and everything else companies do once they grow past five people and a heroic Google Sheet.

That specialization created real leverage. Salesforce got very good at customer management. Zendesk got very good at support workflows. NetSuite got very good at finance controls. Jira got very good at making everyone argue about ticket status with surprising emotional depth.

But each system also built its own map of the business:

  • its own schema
  • its own permissions
  • its own workflow assumptions
  • its own vocabulary
  • its own version of state
  • its own idea of who owns what

That made sense for human-operated software.

Humans can compensate for incomplete maps. We can infer, remember, ask around, and quietly connect five systems before making a decision. We can see a customer name in Slack, remember the renewal risk from last week, know a deployment touched their workspace, and understand why the "simple bug" is actually a revenue problem wearing a fake mustache.

AI systems do not naturally inherit that continuity.

They get a prompt, a tool call, maybe some retrieved documents, and a prayer.

That can work for narrow tasks. It gets shaky when the job depends on the living state of the business.

More Data Is Not More Context

The obvious response is, "Just connect the AI to more systems."

Useful, yes.

Sufficient, no.

Retrieval can tell a system what was written somewhere. It does not automatically tell the system what is true right now, who owns the decision, which workflow is active, what policy applies, what changed yesterday, or whether the same word means different things to Sales, Support, Finance, and Product.

This is the gap I think a lot of AI products run into after the demo.

A document says a customer is strategic. The CRM says the renewal is at risk. Support shows three unresolved tickets. Deployment history shows a recent change. Slack shows an escalation. Finance shows expansion potential. Product knows the account depends on a beta feature that should not exist but very much does.

The useful answer is not sitting in any one of those systems.

It lives in the relationship between them.

That is why "connect all your data" is only half the answer. A pile of ingredients is not dinner. It is just a stressful countertop.

The Missing Layer

The phrase I keep reaching for is a shared operational substrate.

I know: "operational substrate," like its cousin "ontology," is the kind of phrase that makes a room of normal people suddenly remember they have another meeting.

But the idea underneath it is practical: every company already has a working model of itself. It knows who the customers are, which workflows matter, who can approve what, which systems own which facts, what is currently broken, and what should happen next.

The problem is that this model is mostly implied.

It lives in people, tools, conventions, dashboards, permissions, and workflow residue. Humans reconstruct it every day. AI mostly gets fragments.

What changes if that model becomes explicit?

Not as documentation.

Not as a knowledge base.

Not as a dashboard graveyard with better typography.

As a runtime.

Think of it as three layers: the systems of record where facts originate, the operational model that defines how those facts relate, and the runtime surfaces where people and agents actually work.

The middle layer is the important part. It is the shared map that tells humans and machines what the business currently believes about itself.
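As a minimal sketch of those three layers, imagine two systems of record and one shared model assembled from them. Every name here (`CRM`, `SUPPORT`, `Account`, `load_account`) is hypothetical, not any particular product's API:

```python
from dataclasses import dataclass

# Layer 1: systems of record -- each one owns its own facts.
CRM = {"acme": {"tier": "strategic", "renewal_risk": "high"}}
SUPPORT = {"acme": {"open_tickets": 3}}

# Layer 2: the operational model -- one shared shape that says
# how those facts relate and which system owns which field.
@dataclass
class Account:
    name: str
    tier: str          # owned by the CRM
    renewal_risk: str  # owned by the CRM
    open_tickets: int  # owned by the support system

def load_account(name: str) -> Account:
    """Assemble one shared view from the systems of record."""
    crm, support = CRM[name], SUPPORT[name]
    return Account(name, crm["tier"], crm["renewal_risk"],
                   support["open_tickets"])

# Layer 3: runtime surfaces -- humans and agents read the same map.
acct = load_account("acme")
```

The point of the sketch is only that the middle layer is explicit and shared: neither the human dashboard nor the agent reconstructs the account from raw systems on its own.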

Once AI systems operate against that map, they stop behaving like isolated assistants sitting beside the company. They start behaving more like participants inside the operating environment.

That is a much bigger shift than a better autocomplete box.

From Static Screens to Live Workspaces

The part I keep coming back to is what this does to software itself.

Enterprise applications have historically been predefined. Interfaces are manually designed. Workflows are hardcoded upfront. Dashboards are configured around expected questions. Over time, operational software gets rigid because the system has to guess in advance what the business will need later.

That was fine when software was mostly a set of places humans went to do work.

But if the business has a shared live map, software can become less static.

An incident can become a workspace with affected customers, recent deployments, service owners, rollback paths, comms history, and approval rules already present.

A customer escalation can become an adaptive workflow with support context, account history, contract terms, product dependencies, current incidents, and the people who can actually make decisions.

Not just dashboards.

Not just chat.

Software shaped by operational state.

At that point, the AI is no longer simply helping users operate software. It is part of the operating environment alongside humans, workflows, automations, permissions, and systems of record.

That is the part that feels under-discussed.

What Good Looks Like

I do not think the answer is to create an ontology committee and spend twelve quarters debating what counts as an "account." Somewhere a Notion page is already warming up.

The practical version starts where coordination pain is already expensive.

Pick one workflow where the business keeps losing context:

  • customer escalations
  • incident response
  • sales-to-support handoffs
  • onboarding
  • access approvals
  • revenue-impacting product dependencies

Then model just enough of the working state to make the workflow intelligible.

For an escalation, that might mean:

  • the customer and account owner
  • the active support thread
  • related incidents and deployments
  • contract tier and renewal timing
  • impacted product areas
  • decision owners
  • communication history
  • allowed actions
  • approval requirements

Now the AI is not just summarizing a ticket. It can understand why the ticket matters, what else is connected, who needs to act, and where it is allowed to help.

That last part matters. Agent-ready systems are not just systems with APIs. They have clear ownership, stable permissions, explicit state, understandable failure modes, and actions that can be audited or reversed.
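One toy way to make "audited and reversible" concrete, assuming a hypothetical permission table and log rather than any real framework: every action an agent attempts is checked against its grants and recorded along with a way to undo it.

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []
PERMISSIONS = {"agent-7": {"post_status_update"}}  # hypothetical grants

def perform(actor: str, action: str, undo) -> bool:
    """Refuse unpermitted actions; log every attempt with its reversal."""
    allowed = action in PERMISSIONS.get(actor, set())
    AUDIT_LOG.append({
        "actor": actor,
        "action": action,
        "allowed": allowed,
        "undo": undo,  # callable that reverses the action if needed
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

# An agent with production credentials but explicit limits:
ok = perform("agent-7", "post_status_update", undo=lambda: None)
blocked = perform("agent-7", "delete_database", undo=lambda: None)
```

Nothing here is sophisticated; the point is that the guardrails are properties of the environment the agent operates in, not of the agent's good intentions.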

Without that, you do not have an operational agent. You have a confident intern with production credentials.

Agents Need a Business Runtime

The long-term interface for enterprise AI probably will not be just a better chat box.

Chat, copilots, and automation will all matter. But when agents start acting inside a business, they need the same things humans rely on before making decisions: ownership, permissions, current state, and a clear sense of what actions are allowed.

That is the platform layer I think matters: not a smarter assistant floating above the company, but a shared operating model underneath the work.

Because eventually the important question is not:

"Can the AI answer questions?"

It is:

"Can people and agents act from the same understanding of the business?"

That is the shift I cannot stop thinking about.

The companies that solve that will be building more than agents. They will be building the environment agents can actually work inside.