You're Building AI Agents Backwards
Why agents as abstractions matter more than agents as technologies
Last Tuesday, I sat in a meeting that was supposed to be about automating customer onboarding. Fifteen minutes in, we were deep in a heated debate about whether to use GPT-4 or Claude, which vector database had the best recall rates, and which framework was best.
The actual problem – customers were waiting three days for account activation – was somehow forgotten. We'd lost the forest for the trees, or more accurately, lost the customer for the embeddings.
This scene plays out in conference rooms (real or virtual) everywhere, and it succinctly captures one of the biggest misunderstandings about where the value of an agent-based approach lies.
We tend to think of AI Agents as a specific technology, and then realise that it is not actually clear what that technology is. At least with LLMs we know whether we are using one or not. With agents, when is it that we are “doing it right”? So we worry whether we are building “real” agents or just writing software, and in trying to convince ourselves that the agents are “real”, we end up making the technology more complex than it needs to be. We over-engineer in a quest to be credible, not because the problem requires it.
The way out? Think of agents as an abstraction in service of a strategy first, and a technology second. This isn't just semantic gymnastics. It's the key to cutting through the noise and actually delivering value.
The Power of the Right Abstraction
Abstractions are incredibly powerful tools and we use them extensively in everything we do. Think about it this way. When object-oriented programming emerged, its power wasn't in any specific language feature. It was in providing a new way to think about organizing code – encapsulating data and behavior together. The abstraction came first; the implementations followed.
AI Agents offer the same conceptual breakthrough for automation and decision-making. They give us a framework for thinking about:
Encapsulation: We can package complex reasoning and decision-making into discrete entities, hiding the complexity while exposing the actions. Unlike traditional objects that encapsulate data and methods, agents encapsulate goals and capabilities.
Goal-Directed Behavior: Instead of programming specific behaviors, we define desired outcomes. "Keep customer satisfaction above 90%" rather than "If customer says X, respond with Y."
Environmental Awareness: Agents sense and act within their environment, whether that's reading emails, querying databases, or interacting with other systems.
Autonomy Spectrum: We can dial autonomy up or down based on the problem. Some agents need tight guardrails; others can be given broad discretion.
This abstraction is powerful because it lets us design systems that mirror how we naturally think about delegation. When you delegate work to a human colleague, you don't specify every neuron firing – you describe goals, constraints, and available resources. Agent abstractions let us do the same with machines.
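To make this concrete, here is a minimal sketch in Python of what the abstraction (not any particular framework) might look like. Every name in it – Agent, Capability, sense, decide, act, the autonomy labels – is illustrative, an assumption for the sake of the example. The point is simply that the interface speaks in goals, observations, and capabilities rather than prompts and model choices.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Capability:
    """Something the agent can do in its environment (query a database, send an email, ...)."""
    name: str
    run: Callable[..., object]

@dataclass
class Agent:
    goal: str                                  # the outcome we delegate, not a procedure
    capabilities: dict[str, Capability] = field(default_factory=dict)
    autonomy: str = "supervised"               # e.g. "supervised", "bounded", "broad"

    def sense(self, environment: dict) -> dict:
        """Read whatever signals this agent is allowed to observe."""
        return dict(environment)

    def decide(self, observations: dict) -> str:
        """Pick the next capability to use, given the goal and observations.
        The policy could be hand-written rules, an LLM call, or a mix of both."""
        raise NotImplementedError

    def act(self, capability_name: str, **kwargs) -> object:
        return self.capabilities[capability_name].run(**kwargs)
```

Nothing in this interface says how decisions get made, which is exactly the point: the abstraction stays stable while the machinery behind decide can change.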
Why This Matters Now More Than Ever
The current AI landscape is drowning in technical specifications. Every vendor wants to tell you about their proprietary architecture, their unique implementation, their special certification system. They're asking you to care about whether their system qualifies as a "true" agent according to some arbitrary technical checklist.
This is backwards. The question isn't whether your system uses an LLM to control workflow iteration or meets some three-star rating system. The question is: What work can you delegate to it? What problems does it solve? How does it transform your operations?
When you start with agents as abstractions rather than implementations, everything becomes clearer:
Technology becomes flexible: Maybe you use an LLM for natural language understanding but deterministic rules for critical decisions (sketched after this list). The agent abstraction accommodates both.
Evolution becomes natural: Start simple, add sophistication as needed. The conceptual model remains stable even as capabilities grow.
Communication becomes universal: "We need an agent to handle tier-1 support tickets" makes sense to everyone. Technical implementation details don't.
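As an illustration of that flexibility, here is a sketch of one hybrid step: an LLM interprets a free-text request, but the critical decision stays in a deterministic, auditable rule. The call_llm function, its output format, and the refund limit are stand-ins invented for the example, not real APIs or policies.

```python
REFUND_LIMIT = 100.00  # hypothetical business rule

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model API you use; assumed to return 'intent,amount'."""
    # A canned response keeps the sketch runnable; in practice this is an API call.
    return "refund,42.50"

def handle_request(message: str) -> str:
    # The LLM handles the natural language understanding...
    intent, amount = call_llm(f"Extract intent and refund amount from: {message}").split(",")

    # ...but the critical decision is a plain, deterministic rule.
    if intent.strip() == "refund" and float(amount) <= REFUND_LIMIT:
        return f"Refund of {float(amount):.2f} approved automatically."
    return "Escalated to a human reviewer."

print(handle_request("I was double-charged, please refund me."))
```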
From Abstraction to Action
So how do you operationalize this? Start with these questions (a sketch of how the answers might be written down follows the list):
What work and decision-making do you want to delegate? It could be a specific task or a broader area of responsibility.
What does success look like? Define goals in terms of outcomes, not processes. We can then iteratively refine the technology so it gets better and better at reaching those outcomes.
What capabilities are needed? What must the agent sense, decide, and act upon? These answers start to form your technical requirements.
How much autonomy is appropriate? This determines your guardrails and, once you turn your attention to implementation, your specific architecture choices.
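One lightweight way to record those answers, before touching any implementation, is a plain document or record. The AgentCharter name and its fields below are assumptions made for this sketch, not a standard; the tier-1 support example reuses the goals mentioned earlier.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCharter:
    delegated_work: str                  # what work or decisions we hand over
    success_criteria: list[str]          # outcomes, not processes
    required_capabilities: list[str] = field(default_factory=list)
    autonomy: str = "supervised"         # how much discretion the agent gets

tier1_support = AgentCharter(
    delegated_work="Handle tier-1 support tickets end to end",
    success_criteria=["Customer satisfaction above 90%"],
    required_capabilities=["read tickets", "query the knowledge base",
                           "reply to customers", "escalate to a human"],
    autonomy="bounded",
)
```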
Only after answering these should you even consider implementation specifics. Will you use the latest LLM? Perhaps. Will you need complex reasoning chains? Maybe. But these are tactical decisions that flow from your strategic framework, not the other way around.
That's the power of thinking about agents as abstractions. You're freed from dogma to focus on outcomes.
Looking Forward
Organizations that understand agents as a conceptual framework for automation will pull ahead. Those waiting for the "perfect" technical definition will be left behind.
The companies that win won't have the most technically sophisticated agents. They'll have the clearest thinking about what work to delegate and how to structure that delegation. They'll use agents as a lens for reimagining work, not as a checkbox for technical compliance.
So the next time someone asks whether your system is "really" an agent, try this response: "It autonomously pursues goals using its capabilities to solve real problems. That's all that matters."
Then get back to solving your problem.