The Death of Buttons: How AI is Killing Traditional UI

The 'Agentic Experience' is more than just adding AI to old interfaces. It's about designing a new way of interacting—natural, dynamic, and deeply collaborative.

By Ran Shemtov
#AI #UX #Agentic Interfaces #Product Design

2025-06-24

After shipping dozens of AI features at CopilotKit and talking to countless clients, friends, and fellow builders, I've noticed something: we're at the very beginning of understanding a fundamental shift happening right under our noses. As I watch teams build their way toward production, it's become clear that most people don't yet grasp what I've started calling the "Agentic Experience." Teams are building impressive AI features, but they're still thinking in terms of "software + AI chat bubble" rather than something entirely different.

What I call an agentic experience doesn't feel agentic at all. It feels natural, like having a capable partner who understands what you need and helps you get there efficiently. But we're not quite there yet as an industry, and most teams don't even know this destination exists.

The Autonomous Car Moment

We're at an inflection point similar to the early days of autonomous vehicles. Think about it: when you take a ride with Waymo today, it's just a normal car. A person could drive it if it weren't set up for autonomous operation. But in a truly autonomous world, cars won't look like that at all. The whole interior setup will change - the way we think about vehicles that transport people will be fundamentally different. Seats won't all face forward. The steering wheel will disappear. The experience will revolve around comfort, maybe people sleeping during their commute, maybe entirely new forms of interaction.

That's exactly the mindset shift we need for agentic experiences. We're currently building the equivalent of horse-drawn carriages with engines: keeping all the old assumptions about how interfaces should work, just adding AI as a new feature.

The Problem with "AI-Enhanced" Thinking

The most obvious evidence of this old thinking is the UI/UX we all know and love: endless buttons, modals, multi-step flows. Let me give you a concrete example.

Consider an online shoe store. In the traditional approach, you browse the catalog, select a product, choose the color with a button, find your size with a picker, navigate through checkout flows, and finally receive an order confirmation. Sure, this site might have AI - there's probably a chat bubble in the corner where you can ask about delivery or search for specific products. But fundamentally, you're still clicking through the same rigid interface patterns we've used for decades.

When someone abandons their cart, your analytics might show they made it to step 3 of checkout before leaving. But you don't know why. Was the shipping cost too high? Did they get confused by the interface? Were they comparison shopping? You're left guessing from anonymous click data.

Now imagine an AI-native shoe store: You're greeted with a prominent interface saying "Tell me what you're after, let's help you find the perfect shoes." You describe what you want or give some general direction. A generative UI appears showing a curated subset of the catalog, already filtered by your size and preferences. Maybe it says "Unfortunately, we have about 5 more models where your size/color combination is sold out. I can still show them to you if you'd like." You make your selection without button hunting. The AI guides you through a contextual checkout flow and you're done. It feels tailored - like walking into a physical shoe store where an experienced salesperson gets to know you, shows you what's available, and walks with you to the register.

If you're a returning customer, the experience gets even better. Log in a year later and the agent might say "Are we going for something like the Air Jordans again or something else?" and "The size was 11, right?"

But here's what's equally transformative: when someone leaves frustrated, they can explain exactly what they expected to find but couldn't, or why the suggested options didn't match their needs. Instead of anonymous cart abandonment data, you get explicit feedback about unmet expectations. You know exactly when they churned, why they churned, and how they felt about it. That's infinitely more valuable than tracking generic button clicks.
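
To ground the shopping example in code, here's a minimal sketch of the kind of tool such an agent might call behind the scenes. Every name here - the CatalogFilter shape, searchCatalog, the /api/catalog/search endpoint - is a hypothetical I'm making up for illustration, but it shows how the same catalog data can power a conversation instead of a filter maze:

```typescript
// Hypothetical tool for an AI-native shoe store agent; not a real store
// or CopilotKit API.

interface CatalogFilter {
  size?: number;      // e.g. 11 (US), remembered for returning customers
  colors?: string[];  // e.g. ["red", "white"]
  style?: string;     // free-text direction: "something like the Air Jordans"
}

interface Shoe {
  id: string;
  name: string;
  price: number;
}

interface CatalogResult {
  inStock: Shoe[];  // rendered as a generative UI grid, not a text reply
  soldOut: Shoe[];  // "5 more models where your size/color combo is sold out"
}

// The agent calls this on the user's behalf instead of making the user
// hunt through pickers and filter menus.
async function searchCatalog(filter: CatalogFilter): Promise<CatalogResult> {
  const res = await fetch("/api/catalog/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(filter),
  });
  return res.json();
}
```

Notice that the backend endpoint is the same one a traditional filter UI would hit; what changes is who drives it.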

LLM-Powered vs. Agentic: Understanding the Difference

Here's where most teams get confused. Simply using an LLM is a one-off interaction: input goes in, output comes out, maybe a tool gets used. Done. An agentic experience is fundamentally different. There's a graph - a coordinated flow where multiple specialized AI agents work together, each expert in one piece of the overall experience.

But there's also a philosophical difference. The misconceptions I see most often revolve around thinking AI should be an "ask it to do something and get it fully done" tool, rather than a collaborative copilot. People approach AI like a vending machine: insert request, receive output. But the best agentic experiences are conversational and iterative.

Consider how people actually work with effective AI systems. They don't just fire off a single command and expect perfection. They provide context, clarify requirements, give feedback on initial attempts, and refine the approach together. The most successful AI implementations I've seen involve this back-and-forth collaboration rather than one-shot transactions, and the best generated answers came from sessions where context was built gradually, not from isolated prompts.

Let's explore this with a trip planning example. You say "Just booked a flight to Seattle, let's plan a trip." With a regular LLM (like ChatGPT), you'll get a very long text response with some links, and you'll have a back-and-forth conversation. With an agentic experience, it starts exploring your destination. Instead of a tedious ping-pong of questions like "Do you like hiking and museums?", the agent dispatches a human-in-the-loop question rendered as clickable chips on the frontend. A few clicks and the agent continues its work. Here's the key insight: you're using UI to communicate with the agent without even knowing it. You won't get that with ChatGPT.

The system then activates three specialized agents: an Activities agent responsible for things to see and do, a Restaurants agent, and an Accommodation agent. These agents run tools and write to a shared agentic state (essentially a live collection of everything the agents have discovered), and this state is reflected directly on your screen. Just like ChatGPT shows "thinking" text, you might see "I'm considering options here…" while a map opens up and markers start appearing. You see the progress!

The agents finish their work, and you've got a few restaurant options, ranked by your likely preferences, clearly indicated by marker colors. You can click each marker to dive deeper, ask questions, or steer the direction your new itinerary is taking.
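
To make the shared agentic state less abstract, here's a rough sketch of what the trip planner's state might look like. This is a plain TypeScript shape I'm assuming for illustration, not a real CopilotKit or agent-framework schema; every field name is invented:

```typescript
// Illustrative shape of a shared agentic state for the trip planner.

type AgentStatus = "idle" | "thinking" | "done";

interface MapMarker {
  lat: number;
  lng: number;
  label: string;
  rank?: number; // drives the marker color the user sees on the map
}

interface AgentSlice {
  status: AgentStatus;
  markers: MapMarker[];
}

interface TripState {
  destination: string;   // "Seattle"
  preferences: string[]; // answers collected via the clickable chips
  activities: AgentSlice;
  restaurants: AgentSlice;
  accommodation: AgentSlice;
}

// Each specialized agent writes only to its own slice; the frontend
// subscribes to the whole state and re-renders the map as markers appear.
function applyAgentUpdate(
  state: TripState,
  agent: "activities" | "restaurants" | "accommodation",
  update: Partial<AgentSlice>,
): TripState {
  return { ...state, [agent]: { ...state[agent], ...update } };
}
```

The useful property is that three agents can work in parallel without clobbering each other, while the frontend treats the whole object as its single source of truth for what to draw.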

The UI as Agent Canvas

This reveals the core principle that makes something feel "agentic" rather than just "LLM-powered": the UI itself becomes part of the agent's communication toolkit. The agent doesn't just respond with text. It manipulates the interface to show its thinking and progress, just like a car salesperson uses words alongside the actual car you can see, feel, and interact with. Compare these interaction patterns (a short sketch of the agentic pattern follows the comparison):

Traditional Software Thinking:

  • Static buttons that always perform the same action
  • Multiple steps and flows to accomplish simple tasks
  • Users must learn and navigate your interface logic

Plain LLM Thinking:

  • Lots of text, minimal visual representation
  • Everything happens inside a chat window
  • One-off question-and-answer interactions

Agentic Experience:

  • The entire view changes, adapts, and responds as interaction progresses
  • Visual representation appears contextually (maps for trip planning, product grids for shopping)
  • Ongoing collaboration rather than discrete transactions
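
One way to make that third column concrete is to treat the interface as a small vocabulary the agent can speak. Here's a hedged sketch, assuming made-up directive names rather than any real framework API:

```typescript
// Sketch: the agent emits typed directives, and the frontend decides how to
// render them. Directive and component names are invented for illustration.

type UIDirective =
  | { kind: "message"; text: string }                        // plain chat text
  | { kind: "chips"; question: string; options: string[] }   // human-in-the-loop
  | { kind: "map"; markers: { lat: number; lng: number }[] } // trip planning
  | { kind: "productGrid"; productIds: string[] };           // shopping

// Maps each directive to the component the frontend should mount. A real
// app would render actual components; here we just name them.
function componentFor(directive: UIDirective): string {
  switch (directive.kind) {
    case "message":
      return "ChatBubble";
    case "chips":
      return "ChoiceChips"; // clicks flow back to the agent as answers
    case "map":
      return "MapCanvas";
    case "productGrid":
      return "ProductGrid";
  }
}
```

Chips, maps, and grids are all just words in this vocabulary; adding a new interaction pattern means adding a directive, not another page of navigation.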

The Architecture Behind the Magic

From a technical perspective, this requires moving beyond simple LLM calls to agent graphs with real-time state synchronization. The agent's reasoning and progress need to be continuously reflected in the UI, not just at the end of processing.

But here's what's encouraging for teams worried about starting over: you don't need to rebuild everything from scratch. Your existing data architecture can stay largely intact. The same hotel database that powers your traditional booking site becomes infinitely more useful when accessed through intelligent conversation rather than endless filter combinations. If you're transforming existing software, you can often keep your data shape, databases, and backend APIs exactly as they are. The only thing changing is how you serve this data to users. Instead of making them navigate complex filter systems over multiple sessions, a brief conversation can surface exactly what they need.
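
As an illustration, here's a minimal sketch of that synchronization loop, assuming the agent backend streams state snapshots over server-sent events at a hypothetical /agent/state endpoint. The transport and names are my assumptions, not a prescribed stack; the point is that the UI repaints on every snapshot, not just when the run ends:

```typescript
// Minimal real-time sync sketch: the agent backend publishes progress as
// JSON snapshots over server-sent events; existing REST APIs and databases
// stay untouched, because the agent calls them as tools.

interface AgentSnapshot {
  node: string;                   // which node in the agent graph is active
  state: Record<string, unknown>; // the shared agentic state so far
}

function subscribeToAgent(onUpdate: (s: AgentSnapshot) => void): () => void {
  const source = new EventSource("/agent/state");
  source.onmessage = (event) => onUpdate(JSON.parse(event.data));
  return () => source.close(); // unsubscribe when the view unmounts
}

// Usage: repaint the canvas on every snapshot as the graph progresses.
const stop = subscribeToAgent((snapshot) => {
  console.log(`running: ${snapshot.node}`, snapshot.state);
});
```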

Product and Design's Time To Shine

This transformation isn't primarily a technical challenge - it's a product and design challenge. We've spent (and are still spending) time as an industry testing whether AI can do math, write code, or count letters in "strawberry." Now we need product managers and UX designers to join the exploration. The age of agentic experiences demands that we think carefully about choreography: when to show progress, how to handle multiple simultaneous updates, and what level of AI reasoning to expose versus hide. These are fundamentally product and design decisions.

Getting Started: A Call to Action

For anyone ready to explore this space, here's where to begin:

  • Identify your friction points: Where do users struggle most in your current flows? What takes them multiple sessions to accomplish? Where do you see high drop-off rates despite having all the necessary data?

  • Understand what's possible: Explore existing agentic experiences to spark your imagination. See how other teams are solving similar challenges with AI-native approaches rather than AI-enhanced traditional flows.

  • Prototype new interaction patterns: Use your existing design tools to sketch out these new experiences. What would it look like if users could accomplish their goals through conversation rather than navigation? How would you visualize agent progress and state changes?

The exciting part is that prototyping agentic experiences uses familiar tools and methods to create something fundamentally different. When your team understands what's possible, you can use your preferred tools to sketch interfaces that include chat, generative UI, contextual state changes, and ways to display all this new, dynamic data. Need a hand? I'm always happy to discuss these concepts with teams exploring this space.

The Competitive Reality

The world of AI is evolving rapidly. Engineers who don't use AI coding tools may be considered slower and less efficient than their AI-savvy colleagues. Companies that don't explore these new interaction paradigms risk being seen as old-fashioned and outdated. Just as clean, minimal design became the standard - replacing the gradient-heavy, skeuomorphic interfaces of the early 2010s - AI-powered experiences will become the baseline user expectation. The companies that start building this expertise now won't have to scramble and iterate when the market demands it from everyone. You can either get a seat at the table now, or fight for scraps later.


Ran Shemtov is an AI Product Builder & Developer Advocate at CopilotKit. He helps define the future of LLM-powered applications and specializes in turning abstract AI concepts into production-ready tools and experiences.
