Super-Assistants vs the World: Why LLM Wrappers, Traditional SaaS Tools and Specialist AI Agents All Need to Worry
The big LLM companies are launching do-everything super assistants. Where does that leave LLM Wrappers, specialist AI Agents and traditional SaaS tools?
The arrival of “super-assistants” such as Claude 4 Opus, ChatGPT and its forthcoming upgrades, or Google’s various instances of Gemini gives us a sense of where the big tech companies want to go next. In short, they are after everything. Following a path defined by the broad technological capabilities of large language models and the ambition to “dominate everything”, they are building assistants that can connect to every data source and provide an alternative to every user interface. Through integration protocols like MCP and A2A, and misleading terminology such as the “open agentic web”, they want access to your data wherever it may be, and they offer adaptable interfaces that can morph into anything from coding tools to writing assistants to help you get work done while also chatting.
This means that these super-assistants are coming for everyone else.
If your startup is based on the idea of offering a streamlined interface on top of LLM APIs to achieve a specific task, such as creating marketing copy, producing a podcast or performing data analysis, you have to start wondering what your real moat is.
As a traditional SaaS company you are caught between a rock and a hard place. On the one hand, the “agentic” web means that people expect you to make your data available through protocols such as MCP. The minute you do that, however, you are enabling your users to bypass your painstakingly crafted UI. If you are a CRM tool for smaller organisations and Claude 4 can plug into your data through an MCP server and simply answer any question the user may have, what is your actual purpose? How do you differentiate?
Finally, if you are focusing on building a narrow, deeply-tuned vertical agent for coding, legal drafting, medical triage or another domain, you must be looking at these larger models and wondering at what point they simply become as good as, or better than, your painstakingly fine-tuned model. What other moats are there to protect your investment?
In the next few paragraphs I’ll take a brief look at each category and share some thoughts about what I think is likely to happen (as ever, the usual disclaimers apply for any adventure in futurology!).
The Threat to the LLM Wrappers
Over the past couple of years there has been a constant barrage of announcements from startups offering browser extensions, mobile apps or light SaaS products that do one thing well by exploiting LLM capability: writing marketing copy, summarising PDFs, managing job applications and so on. The backend is typically a concoction of prompts and other instructions stitched together as quickly as possible so that the startup can get to launch and find its audience.
These companies will feel increasing pressure from multiple directions.
Overtaken by a feature of the bigger platform. If the functionality is important enough, the big platforms will simply fold it in: Zoom adds a notetaker, Gmail an email writer, and so on. The super-assistants can also build the missing UI in a more flexible way. Every time Anthropic adds another type of artifact to Claude, a startup faces an existential threat.
Priced out. The wrappers are essentially doing a price arbitrage between the cost of tokens (raw material) and the value of the operation (value to the user). This is a fragile position in a fast-moving environment, especially when your growth model depends on giving an initial taster for free and hoping to make a profit on (at best) 10%-20% of your users.
Users empowered to replicate you. LLMs with coding assistants are like Star Trek replicators: any application you need, you can simply create on the spot. What purpose does a thin LLM wrapper serve when a user can describe what they want and the super-assistant can replicate it instantly? What’s more, the replica can fit exactly what the user needs, rather than having to negotiate the competing needs of different types of users.
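To make the “priced out” point concrete, here is a back-of-envelope sketch of wrapper unit economics. Every figure in it (seat price, conversion rate, task and token volumes, API cost) is an invented assumption for illustration, not real pricing data.

```python
# Back-of-envelope unit economics for an LLM wrapper.
# All figures below are illustrative assumptions, not real pricing data.

def monthly_margin(users: int,
                   paid_conversion: float,    # fraction of users who pay (the 10%-20% above)
                   price_per_seat: float,     # monthly subscription price, in dollars
                   tasks_per_user: int,       # tasks run per user per month, free tier included
                   tokens_per_task: int,      # tokens consumed per task
                   cost_per_1k_tokens: float  # raw API cost, in dollars
                   ) -> float:
    """Revenue from paying users minus the token bill for everyone."""
    revenue = users * paid_conversion * price_per_seat
    token_cost = users * tasks_per_user * (tokens_per_task / 1000) * cost_per_1k_tokens
    return revenue - token_cost

# At a 15% conversion rate the arbitrage works:
print(monthly_margin(10_000, 0.15, 20.0, 50, 4_000, 0.01))  # positive margin

# If the platform folds the feature in and conversion halves,
# the same token bill pushes the margin negative:
print(monthly_margin(10_000, 0.07, 20.0, 50, 4_000, 0.01))  # negative margin
```

The fragility is visible in the structure: revenue scales only with the paying fraction, while costs scale with all usage, so anything that erodes conversion, such as a free native feature, squeezes the margin from both sides at once.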
Here are some examples of the products under threat:
- Compose AI: A Chrome extension that helps with writing; it features email replies on its home page. Meanwhile, the Gmail/Workspace “Help Me Write” sidebar does the same in the native UI.
- Fireflies / Otter.ai: These tools offer meeting transcription and summaries, but Zoom, Teams and Google Meet all now come with the same functionality.
- Humata / ChatPDF: These tools specialise as PDF readers, but you can drop a PDF in any of ChatGPT, Claude or Gemini (or NotebookLM) and get the same job done.
- Copy.ai: They offer marketing copy templates but HubSpot & Mailchimp Copilots can now generate campaign copy inside the CRM/email studio.
Now, this is not going to be an immediate shift. Super-assistants are still brittle, and a well-crafted series of prompts to achieve a task still has value. But the direction of travel is clear.
The Threat to Traditional SaaS
Classic SaaS vendors with CRM, ERP, ticketing and HRIS tools have invested decades of effort in perfecting pixel-heavy dashboards and per-seat pricing. The rise of super-assistants flips that equation: users talk to an AI, the AI hits the API, and the GUI is left feeling rather lonely in the corner.
As a SaaS tool you will be feeling a number of different pressures.
UI bypass. Super-assistants are inviting everyone to enable access to data via MCP, which means a user can be interacting with a SaaS tool’s data but never open the native SaaS screens. As a SaaS provider, where do you do your upsells? Where do you learn what features people want?
Migrating pricing logic. When usage is machine-initiated, “$79 per month per human” looks archaic. There is no human at the keyboard: a single machine accesses all the data, with multiple humans working through that machine to interact with it.
API fragility. Legacy throttles, rate limits and brittle authentication crumble under the rapid-fire calls of autonomous agents, making the SaaS product look unreliable even when the fault lies in its ageing plumbing.
Data-layer commoditisation. If the assistant can stitch together Snowflake, Notion and Stripe to achieve the same workflow, monolithic suites risk being unbundled into cheaper point services behind the scenes.
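The pricing tension above can be sketched numerically. The figures here are deliberately invented: fifty humans working through one assistant identity look like a single seat to per-seat billing, while usage-based billing keys revenue to the calls the machine actually makes.

```python
# Per-seat vs usage-based billing when an assistant mediates access.
# All numbers are invented for illustration.

SEAT_PRICE = 79.0  # "$79 per month per human"

def per_seat_revenue(visible_seats: int) -> float:
    """Per-seat billing sees only the accounts that log in."""
    return visible_seats * SEAT_PRICE

def metered_revenue(api_calls: int, price_per_call: float) -> float:
    """Billing keyed to machine-initiated usage instead."""
    return api_calls * price_per_call

humans = 50            # people actually served
calls_per_human = 400  # assistant API calls made on their behalf each month

before = per_seat_revenue(humans)  # what the vendor billed when every human had a seat
after = per_seat_revenue(1)        # what per-seat billing sees once the assistant is the only "user"
usage = metered_revenue(humans * calls_per_human, 0.20)  # a usage-priced alternative
```

The point is not the specific numbers but the shape: seat counts collapse while call volumes explode, so vendors that cannot re-key their pricing to machine-initiated usage watch most of their revenue evaporate.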
Examples of SaaS incumbents that must be feeling the pressure are:
Lightweight CRMs like Pipedrive - A tool that logs deals and nudges users to follow up with clients is already feeling the pressure of tools like Google Sheets and Airtable, which can replicate a lot of the functionality. Add to that the flexibility of LLMs writing queries and performing more complex data analysis, and the risk is very real.
HR software for smaller orgs like CharlieHR - Same situation here: we are essentially dealing with a well-crafted data layer and a nice UI. I absolutely appreciate the amount of effort and thought that goes into these tools, but it will become increasingly simple for organisations to replicate or substitute the functionality.
Expense and receipt management software like Expensify - Log expenses, upload images of receipts and prepare reports. The whole “we can read your receipt” uniqueness of such tools disappears when LLMs can do the same.
Essentially, any tool that is a thin workflow + data layer, together with a UI that requires a lot of clicking and moving around to update and configure data, is at risk. When the assistant owns the conversation, the underlying system becomes just another API.
Of course, not all SaaS tools are equal. The ones at risk are the lightweight solutions that targeted smaller companies and startups with a nice UI and brand but a thin underlying workflow layer. Larger organisations, and SaaS tools in regulated spaces, are going to enjoy protection for some time yet, as they benefit from workflow lock-in and regulatory lock-in.
Nevertheless, I think we will inevitably see a number of SaaS tools go through a deceleration in growth and eventually start losing users. The combination of easily created custom throwaway solutions, flexible conversational interfaces and AI agents that can perform tasks on the fly by combining capabilities as required will mean that fewer people will be on the lookout for the perfect SaaS solution to their problem. They will just spin it up through a super-assistant as and when required.
The Threat to the Specialised Vertical Agents
After the initial excitement following the release of ChatGPT two years ago subsided, the limitations started coming through: not enough specialised data, failures in the more niche domains, and hallucinations unless careful guardrails are put in place.
This opened a fertile space for startups to dig in and capture some value. The pitch is simple: we are safer and more accurate than a generic chatbot. I think this space is also at risk, but there are at least two moats that give it a much longer lifetime.
Regulatory moats: while the larger technology providers could do the work to keep up with regulations across multiple geographies, this requires deep expertise in a domain and an understanding of its internal dynamics. Why spend time on this when there are far more immediate opportunities to go after? For example, worrying about how to handle insurance claims, where the tolerance for error is close to zero, is far less attractive than going after the checkout cart of every single consumer-facing brand.
Data moats: niche domains tend not to make their data available publicly. If you have access to a data set that the larger providers cannot get their hands on, you are sitting on an extremely valuable resource right now. Just as the Wall Street Journal cut a deal with OpenAI for access to its stories, your specialised agent can become one of the underlying capability providers for the bigger assistants.
Nevertheless, the risks are clear as well.
The larger models can keep ingesting more data and creep up on the capabilities of specialised agents, so you need to pick your domains carefully. For example, there were a lot of early efforts in the finance domain, but the latest models from OpenAI and Google have become increasingly good at analysing company balance sheets and end-of-year reports.
We had a flurry of early startups doing notetaking in specialised domains like medicine, but now Epic is building these tools into its own software, while Microsoft, Google and OpenAI are all offering solutions.
Startups in this space need to develop a multi-pronged strategy to succeed. The vertical agents need to be extremely clear about how they offer better safety and guardrails when compared to the super-assistants. The bet is that proven compliance can beat generic convenience.
At the same time, the solution offered needs to integrate deeply into the ecosystem, reducing the time to value for a customer. If the functionality is reduced to a single API call, it is also easy to switch out. If, instead, the functionality combines deep, specialised backend capability with flexible conversational front-end interfaces that customers can drop into their existing systems as co-pilots, the stickiness is far greater.
Finally, the company needs to build a broad base of partnerships in its ecosystem of choice, integrating itself deeply with the software that operates in the space, such as policy administration systems for insurance or healthcare record management systems, and with the people and organisations that work there.
What happens next
Super-assistants will not flatten the software landscape instantly, but they will shift its centre of gravity. The transformation won't follow the neat sequential pattern I may have been implying, with LLM wrappers falling first, then small SaaS providers and finally vertical agents. We’re probably heading toward a more chaotic, bifurcated landscape where disruption and adaptation happen simultaneously across different market segments.
Fragmentation. Rather than super-assistants cleanly displacing categories of software, we'll see the market split along entirely new fault lines. Enterprise customers, spooked by data governance concerns and burned by AI hallucinations in business-critical processes, may initially retreat toward more controlled, specialised solutions. Meanwhile, SMBs and individual users will rapidly embrace super-assistants for their speed and flexibility.
Traditional SaaS companies fight back. The smart ones won't wait to be disrupted, rather they'll cannibalize themselves first. Salesforce, HubSpot, and others are aggressively rebuilding their platforms around AI-first architectures, using their data advantages and customer relationships to create hybrid experiences that combine conversational ease with visual power. The losers will be those who try to protect their existing UI investments rather than reimagining them.
LLM wrappers evolve into "AI-native platforms." Instead of being squeezed out, the most agile wrapper companies will pivot to become the integration layer between super-assistants and legacy systems. They'll survive by solving the "last mile" problems that general-purpose AI can't handle. Weird edge cases, industry-specific workflows, and compliance requirements that emerge when AI meets messy real-world business processes.
Vertical agents become the new middleware. Rather than retreating to narrow niches, successful vertical agents will position themselves as specialized reasoning engines that super-assistants call upon. A legal AI won't compete with Claude for general tasks; instead, Claude will route complex legal queries to specialized agents that provide verified, compliant responses. This creates a new B2B2C model where vertical agents become invisible infrastructure.
The reliability wars intensify. As AI becomes business-critical, a new category of "AI reliability infrastructure" will emerge. Companies will pay premium prices for tools that provide consistency, auditability, and error handling around AI interactions. The winners won't necessarily be the smartest AI—they'll be the most dependable.
Geographic and regulatory balkanization. Different regions will evolve different AI ecosystems based on local regulations, cultural preferences, and data sovereignty requirements. European companies may gravitate toward locally-hosted, transparent AI systems, while others prioritize raw capability regardless of provenance.
The integration complexity explosion. As every software tool races to add AI capabilities, the challenge shifts from "can AI do this task?" to "how do we manage 47 different AI agents that all want to help with the same workflow?" New categories of AI orchestration and conflict resolution tools will emerge.
The timeline is compressed at the edges but extended in the middle. Power users and early adopters will experience dramatic workflow changes in the next 2 years, while large enterprises may take many years to fully transition. This creates sustained opportunities for companies that can bridge both worlds.
Rather than a winner-take-all scenario, we're more likely to see a complex ecosystem where super-assistants serve as the user-facing layer but rely on a vast network of specialised capabilities, data sources, and compliance tools operating behind the scenes. Along the way there are multiple questions we need to answer about the concentration of power, the consequences for society, and the real-world “collateral” costs as AI undoubtedly displaces current ways of doing things.