
While AI experts and influencers fall over themselves trying to outdo each other with analysis of the latest LLMs (DeepSeek, Alibaba, Mistral, OpenAI o3), a rebellion is brewing. From Reddit boards to Twitter and BlueSky threads, the people are fighting back.
Users are sharing elaborate workarounds to disable AI features in the products they have to use every day: Google Gemini, eager to summarise every single email; Apple Intelligence, keen to let me know that a “long” update has arrived (🤷🏽‍♂️); Copilot in Word, Clippy’s arrogant nephew. Perhaps the most telling example, though, is AI answers in Google Search, with users explaining how adding swear words to a query switches the feature off. The resistance isn't just digital fatigue; it's a visceral reaction to a maximalist view of AI: that more AI always equals better.
Yet, as users desperately search for ways to reclaim their pre-AI interfaces, a parallel narrative unfolds. Writers are discovering how AI can serve not as a replacement for creativity but as a sophisticated research assistant. Scientists are using it to accelerate drug discovery. Engineers can explore thousands of solutions in minutes.
This dichotomy - AI as unwanted intruder versus AI as revolutionary tool - reveals a fundamental misunderstanding among tech companies about how humans want to interact with technology.
The AI maximalists, in their fervor to integrate artificial intelligence into every digital crevice, have forgotten a basic principle: tools should serve their users, not the other way around.
The evidence for this disconnect isn't just anecdotal. Recent research from Irrational Labs reveals a stark truth: explicitly labeling features as "AI-powered" actually decreases user trust and doesn't increase willingness to pay. Users aren't impressed by AI buzzwords; they're looking for concrete benefits and real solutions.
As someone who sells tools to embed AI everywhere, I should be the quintessential AI maximalist. But my position in the industry has taught me something crucial: long-term success in AI integration isn't about forcing technology into every possible corner of user experience. It's about thoughtful implementation that prioritizes user choice and concrete benefits.
The path forward should be clear. Tech companies need to stop treating AI like a marketing checkbox and start treating it like what it is: a powerful tool that users should be able to employ on their own terms. When AI features are optional, transparent, and clearly tied to specific benefits, users embrace them. When they're forced, hidden, and justified with vague promises of "enhancement," users revolt.
The future of AI won't be built by maximalists pushing for integration at any cost. It will be built by companies that understand that the most powerful feature they can offer isn't AI itself - it's the ability to choose when and how to use it. In the end, the difference between AI as an unwanted intruder and AI as a revolutionary tool comes down to one simple principle: respect for user agency.