Understanding AI Agents through the Generality - Accuracy - Simplicity (GAS) Framework
Understanding and clearly articulating challenges and trade-offs is key to successful AI Agent deployments. A recent paper provides a great framework through which to express them.
In June 2025, Hasan et al. published a genuinely interesting and useful paper titled "From Model Design to Organizational Design: Complexity Redistribution and Trade-Offs in Generative AI". In it, they present the GAS (Generality - Accuracy - Simplicity) framework as a way to better understand how LLMs are impacting organizations and competitive strategy.
The GAS framework posits that there is an inherent trade-off in model design: "No model can perfectly replicate the phenomena it represents. Instead, effective models must balance competing priorities across three dimensions: generality, accuracy, and simplicity". Optimising for one or two of these dimensions necessitates compromises in the remaining dimension(s).
Generality: A model’s ability to perform across diverse contexts, domains, or tasks. For example, a general-purpose LLM can translate policy documents, generate code, and summarise wikis without requiring new, task-specific training.
Accuracy: The degree to which outputs align with empirical observations or observable reality. This includes the correctness of information, the reliability of results, and the precision of outputs. For example, a fine-tuned model sacrifices generality in pursuit of increased accuracy in a specific domain.
Simplicity: The ease with which users can understand, apply, or interact with the model, together with how structurally complex it is. The typical chat interface to an LLM gives the end user a high degree of simplicity, but only by hiding, or abstracting away, the underlying complexity required to run and maintain the LLM infrastructure that supports inference.
Crucially, organisations "cannot maximize these dimensions simultaneously" and thus model design "involves deliberate trade-offs among them." For example, "gains in accuracy may reduce either generality or simplicity." As the figure below illustrates, high-stakes, domain-specific work favours Accuracy over Generality, and so on.

Reducing Costs and Redistributing Complexity
Focussing on the simplicity dimension, the authors explain that what we are really doing when we introduce AI and LLMs is not simply reducing costs. While we reduce the cognitive cost of performing tasks for end users (i.e. we introduce simplicity at the front-end), we redistribute complexity into downstream layers: governance complexity, infrastructure complexity, and so on.
They make reference to Tesler's Law to support their argument. Formulated in the 1980s by computer scientist Larry Tesler, it states that for any system, there is an inherent amount of complexity that cannot be eliminated, only redistributed. The critical question is who must deal with it: the user, or the designers and developers.
Understanding GAS in the context of AI Agents
While the paper focusses on the broad application of LLMs in organizational contexts, the GAS framework is also a useful lens through which to consider the design of a specific AI Agent application.
You can’t have it all
The GAS framework provides the terminology to describe more formally what a lot of engineers and designers understand instinctively. You simply can’t have it all. Every orchestrator that promises to dynamically coordinate between capabilities, every agent framework that claims to handle any business process, every vendor pitch about "unlimited agency" - they're all dancing around the same uncomfortable truth. You can’t have it all.
Perceived vs Actual agency
Another way to apply the GAS framework to refine our AI Agent design is to consider what capabilities we are communicating to the user, and in what ways we are constraining the overall system at different layers of the application.
In a previous article on AgentsDecoded we talked about the role and perception of agency across the multiple layers of our application. The user interface communicates a certain level of agency (or Generality) and ease of access (Simplicity) to our system; the control layer is where we can manage agency (typically in pursuit of Accuracy) and where we also introduce complexity; and the LLM layer is where a less controllable, emergent agency resides.
The goal is to align the different layers. If your user interface promises the moon (high Generality and Simplicity) but your control layer can't deliver the precision (Accuracy) without an army of engineers maintaining it (complexity redistribution), you are heading for trouble.
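To make the layering concrete, here is a minimal sketch of a control layer that constrains what the LLM can actually do. All the names in it (ControlLayer, lookup_invoice, draft_customer_email) are hypothetical illustrations rather than any real framework: the point is simply that exposing a narrow tool whitelist trades Generality for Accuracy, and that maintaining that whitelist is where the redistributed complexity lands.

```python
# A minimal sketch (not from the paper) of a control layer that constrains
# an agent's emergent agency. All names here are hypothetical illustrations.
from typing import Any, Callable, Dict


def lookup_invoice(invoice_id: str) -> str:
    # Hypothetical domain-specific tool: narrow, but easy to verify (Accuracy).
    return f"Invoice {invoice_id}: status=PAID"


def draft_customer_email(topic: str) -> str:
    # Hypothetical general-purpose tool: flexible, harder to verify (Generality).
    return f"Draft email about: {topic}"


class ControlLayer:
    """Sits between the UI's promises and the LLM's raw output.

    It exposes only an explicit whitelist of tools. That is where we
    deliberately give up Generality to gain Accuracy, and maintaining
    the whitelist is the redistributed complexity."""

    def __init__(self, allowed_tools: Dict[str, Callable[..., str]]):
        self.allowed_tools = allowed_tools

    def execute(self, tool_name: str, **kwargs: Any) -> str:
        if tool_name not in self.allowed_tools:
            # The LLM may "want" to call anything; the control layer says no.
            return f"Refused: '{tool_name}' is outside this agent's scope."
        return self.allowed_tools[tool_name](**kwargs)


# A narrow, high-Accuracy deployment exposes only the domain-specific tool.
control = ControlLayer(allowed_tools={"lookup_invoice": lookup_invoice})

print(control.execute("lookup_invoice", invoice_id="INV-42"))
print(control.execute("draft_customer_email", topic="refund"))  # refused
```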
So What Should You Actually Do?
For a successful AI Agent deployment, you need to pick a clear spot on the GAS triangle and own it.
High Accuracy, Low Generality: Build domain-specific agents that focus on delivering tightly defined use cases reliably. You will need different agents for different processes, but that is an acceptable complexity cost to pay.
High Generality, Lower Accuracy: Accept that your general-purpose agent will make mistakes. Build in human oversight (see the sketch after this list). Make peace with the 80% solution.
High Simplicity: Keep the front-end simple but be prepared to pay for it with backend complexity. Budget for it. Staff for it. Plan for it.
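As one way of picturing the second option, here is a minimal sketch of a human-in-the-loop gate. The names and the confidence threshold are assumptions for illustration, not a prescription: answers the agent is confident about go straight through, and everything else lands in a review queue, which is exactly where the lost Accuracy gets paid for in human effort.

```python
# A minimal sketch, with hypothetical names, of the "High Generality,
# Lower Accuracy" option: route low-confidence answers to a human reviewer.
from dataclasses import dataclass


@dataclass
class AgentAnswer:
    text: str
    confidence: float  # assume the agent, or a separate verifier, scores 0.0-1.0


CONFIDENCE_THRESHOLD = 0.8  # hypothetical value; tune per use case


def handle(answer: AgentAnswer) -> str:
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        # Accept the 80% solution: ship it without review.
        return f"AUTO: {answer.text}"
    # Below the threshold, humans absorb the accuracy gap.
    return f"REVIEW QUEUE: {answer.text}"


print(handle(AgentAnswer("The refund policy allows returns within 30 days.", 0.92)))
print(handle(AgentAnswer("Clause 7.3 caps liability at the contract value.", 0.55)))
```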
The organisations that will win with AI Agents are those that understand these trade-offs and design around them, not those that pretend they don't exist.
There is no free lunch
Every "breakthrough" in agent capabilities is really just shifting complexity around. That new orchestration framework that promises to solve all your problems? It's moving the complexity from your business logic into your infrastructure. That retrieval-augmented agent that never hallucinates? Congratulations, you've just signed up to maintain a knowledge base that needs constant updates.
The GAS framework isn't telling us something we don't already instinctively know. It's giving us the language to articulate why that demo that looked so good is now causing so many headaches in production.
Ultimately, the future of AI Agents isn't about transcending the GAS trade-offs. It's about getting increasingly better at managing them.