CIAO Design Principle #2: Explicit Rules in Code Beat Implicit Rules in Prompts
CIAO (Conversations-Interfaces-Agents-Orchestration) is an AI application framework that is unafraid to state clearly that explicit code is better than prompts.
In the previous post we introduced CIAO - an architectural framework for AI applications and our first design principle: “Focus first on the what of AI applications, the fundamental capabilities and interactions required, rather than the how, the specific technologies and implementations used to realise them”. Before diving deeper into CIAO, I’ll discuss the second design principle, one that may ruffle some feathers.
Explicit rules defined in code are better than implicit rules defined in prompts.
One of the first casualties of designing applications around LLMs (and non-deterministic AI systems in general) is the explicit definition that can be reasoned about programmatically.
Remember clear, unambiguous statements like status = "active", or rules like if temperature >= 20 then disable_heating()? These aren't just code: they're beautiful things. They're efficient. They provide full control.
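A minimal sketch of what such an explicit rule looks like in practice (the threshold and the function name are illustrative, not part of any real system):

```python
HEATING_THRESHOLD_C = 20  # explicit, inspectable, testable

def heating_enabled(temperature_c: float) -> bool:
    """Deterministic rule: heating is off at or above the threshold."""
    return temperature_c < HEATING_THRESHOLD_C

# The same input always yields the same answer, and we can prove it:
assert heating_enabled(19.5) is True
assert heating_enabled(20.0) is False
```

No prompt, no sampling, no surprises: the behavior is fully determined by two lines you can read, test, and version-control.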
If you can solve a problem using just rules and explicit knowledge statements, you should be proud of yourself.
But somewhere along the way, "rule-based" became a bit of a dirty word. Rule-based is old and unfashionable. Everything now needs to be defined in prompts, as if deterministic code is somehow inferior to natural language instructions.
Here's the thing: prompts are rules too. The only difference is that you cannot explicitly control them.
The Hidden Trade-off
Every time we replace a deterministic rule with a prompt to an LLM, we're making a trade. We gain flexibility, natural language understanding, and the ability to handle ambiguity. But we lose something precious: certainty.
This certainty isn't just a technical nicety - it's the foundation of reliable systems. When a traditional rule executes, you know exactly why a decision was made. You can trace it. You can fix it when it breaks. You can guarantee it will behave the same way tomorrow.
With LLM-based systems, these guarantees evaporate. The same prompt can yield different results depending on the system's mood (or more technically, the non-deterministic nature of the generation process).
The False Dichotomy
The industry has constructed this false choice between "outdated" rule-based approaches and "innovative" AI solutions. This framing ignores a fundamental truth: we need both.
Some problems demand the precision and reliability of explicit rules. Others benefit from the flexibility and contextual understanding of LLMs. The art lies in knowing which is which.
Being Explicit About What's Implicit
For AI applications, we need both explicit and implicit reasoning, and most importantly, we need to be explicit about what is implicit. We need to be clear about what we don't fully control, so that we can ask whether we need to control it and how we might go about doing that.
This means documenting where non-determinism exists in our systems. It means setting clear boundaries around where LLMs or other AI technologies make decisions in non-deterministic ways versus where traditional logic prevails. It means building guardrails that constrain the space of possible outputs without eliminating the benefits of flexibility.
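One way to set such a boundary is to validate the non-deterministic output against an explicit, deterministic constraint before acting on it. This is only a sketch: the idea of an LLM proposing a value is assumed, and the allowed set is illustrative.

```python
ALLOWED_STATUSES = {"active", "inactive", "suspended"}

def accept_status(llm_output: str) -> str:
    """Guardrail: an LLM may *propose* a status, but only explicitly
    whitelisted values ever reach the rest of the system."""
    candidate = llm_output.strip().lower()
    if candidate not in ALLOWED_STATUSES:
        # The non-determinism is contained: unknown outputs fail loudly
        # instead of silently flowing into downstream logic.
        raise ValueError(f"LLM proposed unknown status: {candidate!r}")
    return candidate
```

The LLM keeps its flexibility in how it reads messy input, but the space of possible outputs is constrained by a rule we fully control.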
At a cultural level it means learning to love explicit rules once more. If you can solve your problem with good old-fashioned programming, you should absolutely do it. Have the LLM write the code for you if you want to feel better about it ;-)
The Path Forward: Thoughtful Hybridization
The future doesn't belong to pure LLM applications or pure rule-based systems. It belongs to thoughtful hybrids that leverage the strengths of each approach:
Use explicit rules for critical business logic, especially where legal, ethical, or safety considerations are paramount.
Deploy LLMs for understanding context, handling natural language, and managing ambiguity.
Create transparent interfaces between deterministic and non-deterministic components.
Build tools that help us understand and constrain behaviors.
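The shape of such a hybrid can be sketched in a few lines. Here the eligibility judgment is assumed to come from an LLM reading messy customer text (that call is hypothetical and not shown), while the hard business limit lives in code:

```python
MAX_AUTO_REFUND = 100.00  # explicit rule: larger refunds need a human

def decide_refund(amount: float, llm_judged_eligible: bool) -> str:
    """Deterministic policy wrapped around a non-deterministic judgment:
    the LLM assesses eligibility from ambiguous natural language, but
    the monetary limit is enforced in code, not in a prompt."""
    if not llm_judged_eligible:
        return "deny"
    if amount <= MAX_AUTO_REFUND:
        return "auto_approve"
    return "escalate_to_human"
```

The interface between the two components is transparent: a single boolean crosses the boundary, and everything on the deterministic side can be tested exhaustively.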
Questions We Should Be Asking
For every component in our AI systems, we should be asking:
Do we need deterministic behavior here?
What's the cost of unpredictability in this context?
How will we debug issues when they inevitably arise?
Can we provide guarantees about system behavior?
The Conscious Choice
As builders, we have a responsibility to make conscious choices about where we embrace the power of ambiguity and where we maintain the clarity of explicit definitions.
When we do this well, we get the best of both worlds: systems that can understand and adapt to the messy reality of human needs while maintaining the reliability that makes software trustworthy.
So before you rush to replace your rule-based system with an LLM prompt, ask yourself: am I gaining more than I'm giving up? Sometimes the answer will be yes. But sometimes, those explicit definitions might be worth keeping after all.
As we evolve CIAO we will constantly be on the lookout to make sure there is space to ask these questions.
To follow along with the work on CIAO, please subscribe: