Post-Interface Design

On the abolition of interface in the age of autonomous systems

Andrea Bergonzi ・ FEB 2026 ・ Paris

Introduction

For thirty years, we’ve known where interfaces are headed: they disappear.

Mark Weiser said it in 1991. Don Norman wrote a whole book about it in 1998. The design community has been talking about “zero UI” and “ambient intelligence” for decades.

The theory is solved. The practice is not.

We have voice assistants that misunderstand, smart homes that need apps, AI that requires prompting. The gap between vision and reality remains enormous: not because we lack ideas, but because we lack the courage and craft to make those ideas real.

This manifesto synthesizes thirty years of research with today’s AI capabilities to propose seven principles for post-interface design: systems that understand rather than wait to be told, that act in silence rather than demand attention, that adapt to humans rather than forcing humans to adapt to them.

The technology is finally ready. The work begins now.

Historical context

The destination has been visible for decades.

Mark Weiser (1991) described ubiquitous computing: technology so embedded it becomes invisible. He imagined a world where computers fade into the background of our lives, becoming as unremarkable as electricity or running water.

Don Norman (1998) wrote The Invisible Computer, arguing that the best technology is the one you don’t think about. He predicted that as machines became more intelligent, explicit interfaces would become unnecessary: even counterproductive.

Amber Case (2015) gave us Calm Technology: systems that respect our attention.

Golden Krishna (2015) challenged us to stop slapping screens on everything.

The 2000s-2010s brought waves of research: calm technology, ambient intelligence, anticipatory design, zero UI. Academic papers, conference talks, design manifestos. All pointing to the same endpoint.

The problem? We couldn’t build it.

The technology wasn’t there. AI couldn’t understand the context. Sensors were expensive. Machine learning was brittle. Voice recognition failed constantly. We had the vision but not the capability.

So we kept optimizing the interface paradigm instead.

We made GUIs more intuitive. We invented touch screens. We added voice commands. We built chat interfaces. Each generation was celebrated as revolutionary; each still fundamentally required explicit instruction from users.

But something changed in the 2020s.

AI systems can now understand context from multiple signals: location, time, behavior patterns, conversation history, even visual input. They can plan multi-step workflows. They can learn preferences without explicit configuration.

They can act autonomously with reasonable reliability.

The technical foundation that Weiser and Norman imagined finally exists.

What’s missing isn’t technology anymore. It’s design frameworks for systems that work invisibly.

The failure of interface paradigms

Every interface paradigm has been a compromise. A necessary evil while we waited for machines to get smarter.

Command line (1960s-1980s)

Complete cognitive burden on the user. You had to know the syntax, memorize commands, understand the file system structure. Powerful for experts, unusable for everyone else.

Why it failed: Required users to think like computers.

Graphical User Interface (1980s-present)

Reduced cognitive burden through visual metaphors. You could explore, discover, point and click. Revolutionary in 1984. Still the dominant paradigm in 2026.

Why it’s failing: Screens constrain thought.

They force us to flatten intentions into two-dimensional space, to serialize what might be simultaneous, to make visible what could remain ambient. Every interface is a bottleneck where human need must squeeze through pixels and gestures.

The average person checks their phone 144 times per day. Most interactions last under 30 seconds, and most demand visual focus that pulls us from presence into performance.

Voice Interfaces (2010s-present)

Natural language input seemed like the answer. Just speak to your devices like you speak to people.

Why it’s failing: Conversation is still performance.

It requires formulation (converting vague wants into language), articulation (expressing clearly enough for parsing), verification (confirming understanding), and iteration (refining when it fails).

For complex requests, this works. For routine tasks, it’s friction. I shouldn’t need to ask my calendar to schedule my weekly team meeting: it should recognize the pattern. I shouldn’t need to request a ride home after evening class: the system should know my routine.

Chat Interfaces (2020s-present)

LLMs made conversational AI feel magical. ChatGPT, Claude, Gemini: everyone celebrated them as revolutionary.

Why they’re failing: Chat is just another form of explicit instruction.

I’m still negotiating with the machine, still translating my intentions into its language, still doing the work of communication.

True intelligence shouldn’t require me to speak. It should require me to exist, to move through the world with my needs, patterns, and context.

The pattern

Each paradigm reduces cognitive burden slightly. Each is celebrated as a breakthrough. Each still fundamentally requires users to explicitly tell machines what to do.

The interface has softened. It hasn’t disappeared.

Transitional phase: Chat and Agentic systems

We’re living through a specific moment right now: the transition from assistive AI to autonomous AI.

OpenClaw launched in late 2025, hit 145,000 GitHub stars in ten weeks, and was acquired by OpenAI in early 2026. It’s an AI assistant that runs on your computer, accesses your apps, and executes tasks autonomously through messaging platforms like WhatsApp and Telegram.

You message it: “Schedule coffee with Sarah next week.” It checks both calendars, finds mutual availability, sends coordination emails, books the time, sets reminders. You never open a calendar app.

This is progress. It proves autonomous agents can work.

But it’s not the endpoint.

You’re still messaging the system. You’re still looking at screens. You’re still instructing, even if the instructions are high-level.

It’s better than clicking through menus, but it’s not silence. It’s not invisible.

OpenClaw demonstrates where we are: autonomous in execution, but not in understanding.

The limitation of current agents: Today’s autonomous agents (OpenClaw, AI coding assistants, research tools) share a fundamental limitation: they require explicit invocation.

You have to tell them to act. You type a message. You give a command. You initiate the interaction.

The vision of ambient intelligence is different: the system observes continuously, understands context, and acts when needed without being asked.

Example:

Current (OpenClaw): I message “book haircut for Saturday” → System accesses booking API → Confirms appointment → Adds to calendar

Post-Interface: System notices it’s been 4 weeks since last haircut → Knows my preferred time is Saturday morning → Books appointment autonomously → I discover it when checking my calendar, or get a subtle reminder Saturday morning

The difference is intention versus instruction. Understanding versus waiting to be told.
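
To make the contrast concrete, here is a minimal sketch in TypeScript. Every name in it is hypothetical (onMessage, onContextUpdate, bookAppointment), standing in for whatever calendar and booking services a real system would call:

  // Hypothetical sketch of the two models. None of these APIs exist;
  // they stand in for a real booking/calendar integration.

  type Context = { weeksSinceHaircut: number; preferredSlot: string };

  function bookAppointment(service: string, slot: string) {
    console.log(`booked ${service} at ${slot}`); // placeholder side effect
  }

  // Current model: the user must phrase the command.
  function onMessage(command: string) {
    if (command === "book haircut for Saturday") {
      bookAppointment("haircut", "Saturday 10:00");
    }
  }

  // Post-interface model: the system watches context and acts when an
  // inferred intention is clear enough, without being asked.
  function onContextUpdate(ctx: Context) {
    if (ctx.weeksSinceHaircut >= 4) {
      bookAppointment("haircut", ctx.preferredSlot); // no prompt needed
    }
  }

  onMessage("book haircut for Saturday"); // instruction
  onContextUpdate({ weeksSinceHaircut: 4, preferredSlot: "Saturday 10:00" }); // intention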

The Post-Interface model

Here’s what comes next. Seven principles that define how invisible systems should work.

1. Screens are a phase

Screens emerged because they were necessary, not because they were ideal. We needed a surface to translate the invisible into the visible.

As AI closes the capability gap (understanding location, activity, biometrics, patterns, preferences), the need for visual mediation diminishes.

The evolution:

  • Command line → GUI → Voice/Chat → Ambient intelligence
  • Each step reduces explicit work required from users
  • Each step moves closer to contextual understanding

The endpoint: Systems that understand us directly. Trust so complete that visual confirmation becomes unnecessary.

Screens aren’t the destination. They’re scaffolding. As the building completes, the scaffolding comes down.

The death of Input, the rebirth of Output: We must distinguish between the screen as a control panel and the screen as a canvas.

Humans are biologically wired for visual processing. We decode images instantly, while text requires cognitive decoding. We will always need surfaces to show us a map, to display a memory, or to visualize a complex dataset. We will not stop watching movies or looking at art.

The Post-Interface revolution is about killing Input UI, not Output UI.

  • Input UI (dying): Forms, drop-downs, hamburger menus, settings toggles. These are friction. They exist only because the machine doesn't understand us.
  • Output UI (thriving): High-fidelity data, immersive media, augmented reality overlays. These are value.

In the future, screens won't be things we poke to make things happen. They will be magic mirrors that simply show us what we need to know. The "button" dies. The "pixel" remains.

2. Chat is transitional

Conversational interfaces leverage natural language, which makes them feel intuitive.

But conversation carries overhead: formulation, articulation, verification, iteration.

As contextual AI improves, the ratio of valuable chat interactions to unnecessary ones inverts:

  • Early stage: Most interactions require chat
  • Mature stage: Most happen silently, chat reserved for true novelty or override

Chat doesn’t disappear: it becomes optional. The emergency exit, not the main entrance.

The shift: From prompt engineering (how do I ask correctly?) to context engineering (how does the system observe correctly?).

3. Intention > Instruction

Current interfaces conflate “what” with “how”. Users must navigate implementation logic to achieve goals.

Want to book travel? Search flights, compare prices, cross-check policies, calculate costs, coordinate timing, manage multiple systems. The outcome is simple: “get to Berlin on these dates”. The execution is complex: death by a thousand clicks.

The intention-first paradigm:

  • User specifies high-level intention
  • System determines optimal execution
  • AI infers goals, evaluates options, executes workflows, handles ambiguity, learns from outcomes
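
As a sketch only (searchFlights, book, and resolveIntention are invented names), the difference shows up in the shape of the code: an imperative sequence of steps versus a single declarative goal that a planner resolves:

  // Hypothetical sketch: the same trip as instruction vs. intention.

  type Intention = { goal: string; arrive: string; depart: string; budget?: number };

  function searchFlights(from: string, to: string, date: string): string[] {
    return [`${from}->${to} on ${date}`]; // stub standing in for a real search
  }
  function book(option: string) {
    console.log(`booked ${option}`);
  }

  // Instruction: the user drives every step (search, compare, book...).
  const options = searchFlights("PAR", "BER", "2026-03-12");
  book(options[0]);

  // Intention: the user states the outcome; the planner owns the "how".
  function resolveIntention(intent: Intention) {
    const plan = searchFlights("PAR", "BER", intent.arrive); // infer, evaluate...
    book(plan[0]); // ...execute, handle ambiguity, learn from the outcome
  }
  resolveIntention({ goal: "be in Berlin", arrive: "2026-03-12", depart: "2026-03-15", budget: 400 });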

The technical foundation exists. What’s missing is trust, and the design frameworks that enable trust without visibility.

The proper division of labor:

  • Humans: desire, judgment, meaning-making
  • Machines: execution, coordination, optimization

We should each do what we’re best at.

Think of Tony Stark working with JARVIS in Iron Man. Stark doesn’t operate the suit by pulling levers or pressing buttons. He states intentions: “divert power to repulsors”, “run diagnostics on arc reactor”, and JARVIS executes. The technical complexity is delegated. Stark focuses on strategy, creativity, and judgment.

That’s not passivity. That’s augmentation.

When I say “intention over instruction”, I mean freeing humans to focus on what we’re uniquely good at (vision, creativity, judgment) while AI handles what it’s uniquely good at (execution, coordination, optimization).

The goal isn’t to make us lazy. It’s to make us powerful.

The death of the "Mode": Today, we are forced to choose a "mode" before we act. We decide to open a chat app (to type), or press a microphone button (to speak), or pick up a mouse (to click). We have to pre-configure our input method.

Post-Interface Design abolishes the "mode".

In a truly intelligent system, input is fluid.

  • I might point at a lamp (gesture).
  • And say, "Make it warmer" (voice).
  • While looking at my book (gaze context).

The system doesn't need three separate apps for this. It fuses the signals. It understands that "it" refers to the lamp I'm pointing at, and that "warmer" refers to color temperature, not heat, because it knows I am reading.

We don't stop typing. We don't stop talking. We just stop switching. We use whatever signal is most natural in the moment, and the machine catches them all.
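
What might fusion look like in code? A minimal sketch, assuming the three signals have already been captured and time-aligned (all field names are invented):

  // Hypothetical signal-fusion sketch: independent modalities are
  // resolved into one command inside a short time window.

  type Fused = { pointedAt?: string; utterance?: string; gazeActivity?: string };

  function fuse(f: Fused): string {
    if (f.pointedAt && f.utterance?.includes("warmer")) {
      // "it" binds to the pointed-at object; gaze context disambiguates
      // "warmer" (reading => color temperature, not heat).
      const meaning = f.gazeActivity === "reading" ? "color temperature" : "heat";
      return `set ${f.pointedAt} ${meaning} to warm`;
    }
    return "ask for clarification"; // chat remains the emergency exit
  }

  console.log(fuse({ pointedAt: "lamp", utterance: "make it warmer", gazeActivity: "reading" }));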

4. Silence is a feature

Contemporary systems are pathologically chatty. Every action generates feedback: notifications, confirmations, progress bars, success messages.

We’ve created the digital equivalent of a refrigerator that pings you every time it keeps food cold.

This death by a thousand notifications is fundamentally at odds with ambient intelligence. You cannot disappear into the background while constantly announcing your presence.

Silence requires three conditions:

  1. Sufficient reliability that users don’t need confirmation of basic operations
  2. Appropriate scoping of what deserves attention vs. what should remain invisible
  3. Effective failure signaling when intervention is genuinely required

As systems mature, notifications invert from default to exception. Silent operation becomes the primary interface. Sound becomes reserved for genuine anomalies.

The shift: From “prove the system worked” to “assume the system worked unless informed otherwise”.

The electricity in your walls doesn’t announce its presence. The foundation of your building doesn’t seek approval. The best infrastructure becomes environmental: so reliable it dissolves into the background of existence.
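
The three conditions above translate almost directly into a notification policy. A hedged sketch, with thresholds invented for illustration:

  // Hypothetical notification policy: silence is the default,
  // attention is the exception.

  type ActionResult = {
    confidence: number;
    succeeded: boolean;
    scope: "routine" | "consequential";
  };

  function shouldNotify(r: ActionResult): boolean {
    if (!r.succeeded) return true; // condition 3: signal genuine failures
    if (r.scope === "consequential") return true; // condition 2: scope attention
    return r.confidence < 0.95; // condition 1: reliability earns silence
  }

  console.log(shouldNotify({ confidence: 0.99, succeeded: true, scope: "routine" })); // false: stay silent
  console.log(shouldNotify({ confidence: 0.99, succeeded: false, scope: "routine" })); // true: intervene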

5. Systems must adapt (The bespoke principle)

The old paradigm assumed users would adapt to systems. We called this the "learning curve". We celebrated power users who memorized shortcuts to navigate broken mental models.

This was backwards. Intelligence is measured by adaptation. If a system requires you to change your behavior to use it, the system has failed.

The off-the-rack problem: Right now, all software is "off-the-rack". Photoshop has 10,000 features because it is designed for all photographers. Gmail looks the same whether you get 5 emails a day or 500.

An off-the-rack suit forces you to adjust your posture to hide the bad fit. "Off-the-rack" software forces you to adjust your thinking to match the developer's logic. We call this "flexible", but it is really just lazy design dressed up as user choice.

True adaptation is behavioral: today's "personalization" is superficial, picking a dark mode or a color theme. That is decoration, not adaptation.

A Post-Interface system is bespoke. It molds to the human:

  • It observes: Learning how you actually use the tools vs. how the designer expected.
  • It trims: Hiding the 9,000 Photoshop features you never touch.
  • It evolves: Reshaping itself as your skills improve.

The shift: We are no longer designing static interfaces. We are creating adaptive frameworks. The deliverable is not a Figma file; it is an algorithm that generates a bespoke experience for a user of one.

Bespoke quality at an off-the-rack price.
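
What observing, trimming, and evolving could mean in practice, as a rough sketch (the feature names and usage counts are invented):

  // Hypothetical adaptive-surface sketch: the visible feature set is
  // derived from observed usage, not fixed by the designer.

  const usage: Record<string, number> = {
    crop: 412, levels: 97, "clone-stamp": 3, "3d-extrude": 0,
  };

  // Trim: expose only what this user actually reaches for.
  function visibleFeatures(minUses = 5): string[] {
    return Object.entries(usage)
      .filter(([, count]) => count >= minUses)
      .map(([name]) => name);
  }

  // Evolve: a feature crosses the threshold as skills grow.
  usage["clone-stamp"] += 4;
  console.log(visibleFeatures()); // ["crop", "levels", "clone-stamp"]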

6. Designers will design behavior

When interfaces disappear, what remains to design?

Everything.

The choreography of invisible systems is more complex than the arrangement of visible ones. When there are no buttons, no screens, no visual hierarchy, we must design the behavior of intelligence itself.

The new design challenges:

  • Timing: When should a system act vs. wait?
  • Confidence thresholds: How certain must AI be before autonomous action?
  • Failure modes: What happens when invisible systems break?
  • Trust calibration: How do users learn to trust silent systems without constant confirmation? The initial trust barrier is high. We must design “Minimum Viable Transparency”, showing just enough to prove competence, then fading away.
  • Ethical boundaries: Which decisions should never be automated?
  • Multi-agent coordination: How do multiple AI systems collaborate invisibly?

The designer’s expanded role:

  • Behavioral architecture (defining rules for observe/infer/act)
  • Temporal design (orchestrating when things happen)
  • Confidence modeling (thresholds for action vs. confirmation)
  • Failure choreography (graceful degradation)
  • Trust engineering (enabling safe abdication of control)
  • Ethical guardrails (boundaries AI can’t cross)

The deliverable is no longer a prototype: it’s a behavioral model. Principles and constraints governing how an intelligent system operates.
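
One way to imagine that deliverable: a typed specification instead of a screen. A sketch under obvious assumptions; every field here is hypothetical:

  // Hypothetical behavioral model: the design artifact is a set of
  // rules and thresholds, not a layout.

  type BehavioralModel = {
    actThreshold: number;     // confidence required for autonomous action
    confirmThreshold: number; // below this, ask instead of act
    neverAutomate: string[];  // ethical boundaries
    onFailure: "silent-retry" | "notify" | "escalate";
  };

  const calendarAgent: BehavioralModel = {
    actThreshold: 0.9,
    confirmThreshold: 0.6,
    neverAutomate: ["medical decisions", "financial transfers"],
    onFailure: "notify",
  };

  function decide(confidence: number, m: BehavioralModel): "act" | "confirm" | "wait" {
    if (confidence >= m.actThreshold) return "act";
    if (confidence >= m.confirmThreshold) return "confirm";
    return "wait";
  }

  console.log(decide(0.95, calendarAgent)); // "act"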

Current tools (Figma, Sketch) are built for spatial arrangement, not behavioral orchestration. We need new frameworks, vocabularies, and tools for designing invisible experiences.

7. The measure of intelligence

An intelligent system is one you forget exists.

Not because it fails to act, but because it acts so precisely aligned with your needs that you attribute its effects to the natural order of things.

Current metrics are wrong:

  • Engagement (time spent, frequency)
  • Satisfaction (NPS, ratings)
  • Performance (speed, uptime)

These measure user interaction with systems, not user outcomes despite systems.

The best post-interface system would score poorly on engagement (users don’t need to engage). It might score poorly on satisfaction surveys (users forget it exists, so they never think to rate it).

New metrics needed:

  • Cognitive offload (how much mental effort is eliminated?)
  • Outcome achievement (did goals manifest without intervention?)
  • Attention preservation (how much focus is protected vs. fragmented?)
  • Invisible reliability (how often did things “just work”?)
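
These can be computed, even if crudely. A sketch of two of them over a hypothetical event log:

  // Hypothetical metrics sketch: value is in what did NOT demand attention.

  type Event = { autonomous: boolean; succeeded: boolean; userIntervened: boolean };

  const log: Event[] = [
    { autonomous: true, succeeded: true, userIntervened: false },
    { autonomous: true, succeeded: true, userIntervened: false },
    { autonomous: true, succeeded: false, userIntervened: true },
  ];

  // Invisible reliability: share of actions that "just worked" silently.
  const invisibleReliability =
    log.filter((e) => e.autonomous && e.succeeded && !e.userIntervened).length / log.length;

  // Outcome achievement: goals that manifested without intervention.
  const outcomeAchievement = log.filter((e) => !e.userIntervened).length / log.length;

  console.log(invisibleReliability.toFixed(2), outcomeAchievement.toFixed(2)); // 0.67 0.67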

The measurement paradox

  1. Low-intelligence systems generate measurable activity
  2. High-intelligence systems generate measurable absence

We must learn to measure negative space. To quantify what didn’t require attention. To value interactions that never needed to happen.

Success becomes invisible. Metrics must evolve to see it anyway.

Implications

Design

These principles require fundamental changes to our practice. We are moving from "Spatial Design" (arranging pixels) to "Temporal Design" (choreographing behavior).

The end of the "User Flow": The linear "User Journey Map" is dead. It assumes a predictable path. We must now design Context Maps: understanding the cloud of data points around a user and defining how the system reacts to them in real time.

Accessibility in the Dark: Screenless interfaces are a double-edged sword. They liberate blind users from visual bias, but they can isolate deaf users from information. When the interface is invisible, how do we ensure it is universal? We must design multimodal fallbacks (haptic, auditory, visual) that trigger automatically based on ability.
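
A sketch of what automatic fallback selection might look like, assuming a hypothetical ability profile:

  // Hypothetical accessibility fallback: the same message is routed to
  // whichever modality the user can receive, selected automatically.

  type Profile = { canSee: boolean; canHear: boolean; hasHaptics: boolean };

  function deliver(message: string, p: Profile): string {
    if (p.canHear) return `audio: ${message}`;
    if (p.canSee) return `visual: ${message}`;
    if (p.hasHaptics) return `haptic pattern for: ${message}`;
    return `queued for caregiver relay: ${message}`; // last-resort fallback
  }

  console.log(deliver("door unlocked", { canSee: true, canHear: false, hasHaptics: true }));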

Generative UI (The Last Designers): We must be honest. We are the last generation of interface designers. In the Post-Interface era, "designing" a screen is an inefficiency. Why pre-bake a layout in Figma for a context that might happen next year? The system should generate the interface on the fly: perfectly adapted to the user’s eyesight, lighting, and urgency. We stop being painters. We become the authors of the algorithm that paints.
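
Generating rather than pre-baking might look like this; the context fields and render rules are invented for illustration:

  // Hypothetical generative-UI sketch: layout parameters are derived
  // from live context instead of fixed in a design file.

  type RenderContext = { eyesightDiopters: number; ambientLux: number; urgent: boolean };

  function generateLayout(ctx: RenderContext) {
    return {
      fontSizePx: ctx.eyesightDiopters > 1 ? 22 : 16, // adapt to eyesight
      theme: ctx.ambientLux < 50 ? "dark" : "light",  // adapt to lighting
      detail: ctx.urgent ? "headline-only" : "full",  // adapt to urgency
    };
  }

  console.log(generateLayout({ eyesightDiopters: 1.5, ambientLux: 20, urgent: true }));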

New tools for new problems: Figma and Sketch are built for static arrangement. We need tools that simulate behavior, confidence thresholds, and trust calibration. The "Figma for Logic" hasn't been built yet, but we desperately need it.

Business

Post-Interface Design destroys the current SaaS business model.

The death of "Engagement": Today, companies optimize for "Time in App". If I design a system that works perfectly (invisibly), the user never opens the app. In the current model, that looks like failure (Churn). In the new model, that is the definition of success.

Pricing the outcome: We must shift from pricing by "Seat" (access to the tool) to pricing by "Outcome" (value delivered).

  • Old model: Pay $10/month for a calendar app.
  • New model: Pay $0.50 per meeting successfully scheduled.

This is harder to measure, but it is the only way to align incentives. If the AI is doing the work, you pay for the result, not the interface.
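
A toy comparison, using the figures from the example above, shows how the incentive flips:

  // Toy sketch of seat pricing vs. outcome pricing,
  // using the figures from the example above.

  const seatPricePerMonth = 10; // old model: pay for access
  const pricePerMeeting = 0.5;  // new model: pay per scheduled meeting

  function monthlyCost(meetingsScheduled: number) {
    return {
      seat: seatPricePerMonth,
      outcome: meetingsScheduled * pricePerMeeting,
    };
  }

  console.log(monthlyCost(12)); // { seat: 10, outcome: 6 }: light users pay less
  console.log(monthlyCost(40)); // { seat: 10, outcome: 20 }: heavy users pay for value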

The differentiation crisis: When the interface disappears, visual branding becomes commodified. You can't compete on "Sleek UI" if no one sees it. Companies will compete on Trust and Reliability. The brand is no longer what it looks like; it is how well it behaves.

Ethics

We are asking users to make a dangerous trade.

The surveillance bargain: To make this work, the system needs to watch you. Constantly. Location, health, communications, finances. You cannot have "anticipatory design" without total surveillance. We are trading privacy for agency. This bargain must be made consciously, not hidden in Terms of Service.

The agency paradox: If the system does everything for me, do I lose the ability to do it myself? We risk creating Learned Helplessness at a societal scale. If the GPS always navigates, I forget how to read a map. We must design "Manual Modes" and "Exit Hatches", not just for debugging, but to keep our human cognitive muscles sharp. We are building systems to augment us, not to replace us.

The serendipity deficit: If systems adapt perfectly to our past behavior, they trap us in a loop of reinforced preference. We risk optimizing for efficiency at the cost of discovery. We must design for randomness: intentional friction that introduces us to the unexpected.

Conclusion

The Post-Interface era arrives not when we build better screens, but when we stop building them entirely.

We are not building systems that think for us. We are building systems that allow us to think clearly for the first time in decades.

The Synthetic Subconscious

The ultimate destination of Post-Interface Design is not an assistant that lives in your pocket; it is an expansion of the mind itself. When the latency between "intent" and "action" drops to zero, the tool ceases to be a tool. It becomes a limb.

When I pick up a cup, I don't "command" my hand to grasp it. I simply want the cup, and the hand obeys. The interface is biological, invisible, and immediate.

The future of AI is not a better chatbot. It is a Synthetic Subconscious.

It is a layer of intelligence that processes the world for you, filters the noise, and presents you not with notifications, but with intuition. You won't "read" that it's going to rain; you will just "know" to take an umbrella. You won't "search" for the answer; the fact will simply arrive in your mind the moment you need it.

We are not building software. We are building an Exocortex.

The era of "Human-Computer Interaction" is ending. The era of "Human-Computer Integration" has begun.

References & Inspiration

  • Golden Krishna, The Best Interface Is No Interface (2015)
  • Amber Case, Calm Technology (2015)
  • Don Norman, The Invisible Computer (1998)
  • Mark Weiser, The Computer for the 21st Century (1991)
  • Bret Victor, The Humane Representation of Thought (2014)

About the author

Andrea Bergonzi is a designer based in Paris, currently working on the design of AI-native tools at Sinch.

His path to Post-Interface Design was non-linear. He began as the founder of a health-tech startup (Bio Cardiological Computing), where he patented medical devices that required absolute precision and zero friction: a lesson that shaped his philosophy that "the best interface is the one that disappears".

He has since spent six years designing complex systems for companies like Brevo and Airfinity, working constantly with predictive technology (in health, marketing, and beyond), which reinforced the idea that people want systems that anticipate their desired outcomes.

He wrote Post-Interface Design because he believes we are trapped in a "Screen Era" that is holding back human potential. He is currently working on the frameworks to get us out.

Want to exchange thoughts?

Connect on LinkedIn or Email Me.

Post-Interface Design

On the abolition of interface in the age of autonomous systems

Andrea Bergonzi ・ FEB 2026 ・ Paris

Read online

Download PDF

Introduction

For thirty years, we’ve known where interfaces are headed: they disappear.

Mark Weiser said it in 1991. Don Norman wrote a whole book about it in 1998. The design community has been talking about “zero UI” and “ambient intelligence” for decades.

The theory is solved. The practice is not.

We have voice assistants that misunderstand, smart homes that need apps, AI that requires prompting. The gap between vision and reality remains enormous: not because we lack ideas, but because we lack the courage and craft to make those ideas real.

This manifesto synthesizes thirty years of research with today’s AI capabilities to propose seven principles for post-interface design: systems that understand rather than wait to be told, that act in silence rather than demand attention, that adapt to humans rather than forcing humans to adapt to them.

The technology is finally ready. The work begins now.

Historical context

The destination has been visible for decades.

Mark Weiser (1991) described ubiquitous computing: technology so embedded it becomes invisible. He imagined a world where computers fade into the background of our lives, becoming as unremarkable as electricity or running water.

Don Norman (1998) wrote The Invisible Computer, arguing that the best technology is the one you don’t think about. He predicted that as machines became more intelligent, explicit interfaces would become unnecessary: even counterproductive.

Amber Case (2015) gave us Calm Technology: systems that respect our attention.

Golden Krishna (2015) challenged us to stop slapping screens on everything.

The 2000s-2010s brought waves of research: calm technology, ambient intelligence, anticipatory design, zero UI. Academic papers, conference talks, design manifestos. All pointing to the same endpoint.

The problem? We couldn’t build it.

The technology wasn’t there. AI couldn’t understand the context. Sensors were expensive. Machine learning was brittle. Voice recognition failed constantly. We had the vision but not the capability.

So we kept optimizing the interface paradigm instead.

We made GUIs more intuitive. We invented touch screens. We added voice commands. We built chat interfaces. Each generation is celebrated as revolutionary, each still fundamentally requiring explicit instruction from users.

But something changed in the 2020s.

AI systems can now understand context from multiple signals: location, time, behavior patterns, conversation history, even visual input. They can plan multi-step workflows. They can learn preferences without explicit configuration.

They can act autonomously with reasonable reliability.

The technical foundation that Weiser and Norman imagined finally exists.

What’s missing isn’t technology anymore. It’s design frameworks for systems that work invisibly.

The failure of interface paradigms

Every interface paradigm has been a compromise. A necessary evil while we waited for machines to get smarter.

Command line (1960s-1980s)

Complete cognitive burden on the user. You had to know the syntax, memorize commands, understand the file system structure. Powerful for experts, unusable for everyone else.

Why it failed: Required users to think like computers.

Graphical User Interface (1980s-present)

Reduced cognitive burden through visual metaphors. You could explore, discover, point and click. Revolutionary in 1984. Still the dominant paradigm in 2026.

Why it’s failing: Screens constrain thought.

They force us to flatten intentions into two-dimensional space, to serialize what might be simultaneous, to make visible what could remain ambient. Every interface is a bottleneck where human need must squeeze through pixels and gestures.

The average person checks their phone 144 times per day. Most interactions under 30 seconds, most requiring visual focus that pulls them from presence into performance.

Voice Interfaces (2010s-present)

Natural language input seemed like the answer. Just speak to your devices like you speak to people.

Why it’s failing: Conversation is still performance.

It requires formulation (converting vague wants into language), articulation (expressing clearly enough for parsing), verification (confirming understanding), and iteration (refining when it failed).

For complex requests, this works. For routine tasks, it’s friction. I shouldn’t need to ask my calendar to schedule my weekly team meeting: it should recognize the pattern. I shouldn’t need to request a ride home after evening class: the system should know my routine.

Chat Interfaces (2020s-present)

LLMs made conversational AI feel magical. ChatGPT, Claude, Gemini: everyone celebrated them as revolutionary.

Why they’re failing: Chat is just another form of explicit instruction.

I’m still negotiating with the machine, still translating my intentions into its language, still doing the work of communication.

True intelligence shouldn’t require me to speak. It should require me to exist, to move through the world with my needs, patterns, and context.

The pattern

Each paradigm reduces cognitive burden slightly. Each is celebrated as a breakthrough. Each still fundamentally requires users to explicitly tell machines what to do.

The interface has softened. It hasn’t disappeared.

Transitional phase: Chat and Agentic systems

We’re living through a specific moment right now: the transition from assistive AI to autonomous AI.

OpenClaw launched in late 2025 and hit 145,000 GitHub stars in ten weeks and was rapidly acquired by OpenAI in early 2026. It’s an AI assistant that runs on your computer, accesses your apps, and executes tasks autonomously through messaging platforms like WhatsApp and Telegram.

You message it: “Schedule coffee with Sarah next week". It checks both calendars, finds mutual availability, sends coordination emails, books the time, sets reminders. You never open a calendar app.

This is progress. It proves autonomous agents can work.

But it’s not the endpoint.

You’re still messaging the system. You’re still looking at screens. You’re still instructing, even if the instructions are high-level.

It’s better than clicking through menus, but it’s not silence. It’s not invisible.

OpenClaw demonstrates where we are: autonomous in execution, but not in understanding.

The limitation of current agents: Current autonomous agents, like OpenClaw, AI coding assistants, research tools, etc., share a fundamental limitation: they require explicit invocation.

You have to tell them to act. You type a message. You give a command. You initiate the interaction.

The vision of ambient intelligence is different: the system observes continuously, understands context, and acts when needed without being asked.

Example:

Current (OpenClaw): I message “book haircut for Saturday” → System accesses booking API → Confirms appointment → Adds to calendar

Post-Interface: System notices it’s been 4 weeks since last haircut → Knows my preferred time is Saturday morning → Books appointment autonomously → I discover it when checking my calendar, or get a subtle reminder Saturday morning

The difference is intention versus instruction. Understanding versus waiting to be told.

The Post-Interface model

Here’s what comes next. Seven principles that define how invisible systems should work.

1. Screens are a phase

Screens emerged because they were necessary, not because they were ideal. We needed a surface to translate the invisible into the visible.

As AI closes the capability gap (understanding location, activity, biometrics, patterns, preferences) the need for visual mediation diminishes.

The evolution:

  • Command line → GUI → Voice/Chat → Ambient intelligence
  • Each step reduces explicit work required from users
  • Each step moves closer to contextual understanding

The endpoint: Systems that understand us directly. Trust so complete that visual confirmation becomes unnecessary.

Screens aren’t the destination. They’re scaffolding. As the building completes, the scaffolding comes down.

The death of Input, the rebirth of Output: We must distinguish between the screen as a control panel and the screen as a canvas.

Humans are biologically wired for visual processing. We decode images instantly, while text requires cognitive decoding. We will always need surfaces to show us a map, to display a memory, or to visualize a complex dataset. We will not stop watching movies or looking at art.

The Post-Interface revolution is about killing Input UI, not Output UI.

  • Input UI (dying): Forms, drop-downs, hamburger menus, settings toggles. These are friction. They exist only because the machine doesn't understand us.
  • Output UI (thriving): High-fidelity data, immersive media, augmented reality overlays. These are value.

In the future, screens won't be things we poke to make things happen. They will be magic mirrors that simply show us what we need to know. The "button" dies. The "pixel" remains.

2. Chat is transitional

Conversational interfaces leverage natural language, which feels natural.

But conversation carries overhead: formulation, articulation, verification, iteration.

As contextual AI improves, the ratio of valuable chat interactions to unnecessary ones inverts:

  • Early stage: Most interactions require chat
  • Mature stage: Most happen silently, chat reserved for true novelty or override

Chat doesn’t disappear: it becomes optional. The emergency exit, not the main entrance.

The shift: From prompt engineering (how do I ask correctly?) to context engineering (how does the system observe correctly?).

3. Intention > Instruction

Current interfaces conflate “what” with “how”. Users must navigate implementation logic to achieve goals.

Want to book travel? Search flights, compare prices, cross-check policies, calculate costs, coordinate timing, manage multiple systems. The outcome is simple: “get to Berlin on these dates”. The execution is complex: death by a thousand clicks.

The intention-first paradigm:

  • User specifies high-level intention
  • System determines optimal execution
  • AI infers goals, evaluates options, executes workflows, handles ambiguity, learns from outcomes

The technical foundation exists. What’s missing is trust—and the design frameworks that enable trust without visibility.

The proper division of labor:

  • Humans: desire, judgment, meaning-making
  • Machines: execution, coordination, optimization

We should each do what we’re best at.

Think of Tony Stark working with JARVIS in Iron Man. Stark doesn’t operate the suit by pulling levers or pressing buttons. He states intentions: “divert power to repulsors", “run diagnostics on arc reactor”, and JARVIS executes. The technical complexity is delegated. Stark focuses on strategy, creativity, and judgment.

That’s not passivity. That’s augmentation.

When I say “intention over instruction", I mean freeing humans to focus on what we’re uniquely good at: vision, creativity, judgment. While AI handles what it’s uniquely good at: execution, coordination, optimization.

The goal isn’t to make us lazy. It’s to make us powerful.

The death of the "Mode": Today, we are forced to choose a "mode" before we act. We decide to open a chat app (to type), or press a microphone button (to speak), or pick up a mouse (to click). We have to pre-configure our input method.

Post-Interface Design abolishes the "mode".

In a truly intelligent system, input is fluid.

  • I might point at a lamp (gesture).
  • And say, "Make it warmer" (voice).
  • While looking at my book (gaze context).

The system doesn't need three separate apps for this. It fuses the signals. It understands that "It" refers to the lamp I'm pointing at, and "Warmer" refers to the light temperature, not the heat, because it knows I am reading.

We don't stop typing. We don't stop talking. We just stop switching. We use whatever signal is most natural in the millisecond, and the machine catches it all.

4. Silence is a feature

Contemporary systems are pathologically chatty. Every action generates feedback: notifications, confirmations, progress bars, success messages.

We’ve created the digital equivalent of a refrigerator that pings you every time it keeps food cold.

This death by a thousand notifications is fundamentally at odds with ambient intelligence. You cannot disappear into the background while constantly announcing your presence.

Silence requires three conditions:

  1. Sufficient reliability that users don’t need confirmation of basic operations
  2. Appropriate scoping of what deserves attention vs. what should remain invisible
  3. Effective failure signaling when intervention is genuinely required

As systems mature, notifications invert from default to exception. Silent operation becomes the primary interface. Sound becomes reserved for genuine anomalies.

The shift: From “prove the system worked” to “assume the system worked unless informed otherwise".

The electricity in your walls doesn’t announce its presence. The foundation of your building doesn’t seek approval. The best infrastructure becomes environmental: so reliable it dissolves into the background of existence.

5. Systems must adapt (The bespoke principle)

The old paradigm assumed users would adapt to systems. We called this the "learning curve". We celebrated power users who memorized shortcuts to navigate broken mental models.

This was backwards. Intelligence is measured by adaptation. If a system requires you to change your behavior to use it, the system has failed.

The off-the-rack problem: Right now, all software is "off-the-rack". Photoshop has 10,000 features because it is designed for all photographers. Gmail looks the same whether you get 5 emails a day or 500.

An off-the-rack suit forces you to adjust your posture to hide the bad fit. "Off-the-rack" software forces you to adjust your thinking to match the developer's logic. We call this "flexible", but it is really just lazy design dressed up as user choice.

True adaptation is behavioral: "Personalization" today is superficial: picking a dark mode or a color theme. That is decoration, not adaptation.

A Post-Interface system is bespoke. It molds to the human:

  • It observes: Learning how you actually use the tools vs. how the designer expected.
  • It trims: Hiding the 9,000 Photoshop features you never touch.
  • It evolves: Reshaping itself as your skills improve.

The shift: We are no longer designing static interfaces. We are creating adaptive frameworks. The deliverable is not a Figma file; it is an algorithm that generates a bespoke experience for a user of one.

Bespoke quality, at the cost of off-the-rack.

6. Designers will design behavior

When interfaces disappear, what remains to design?

Everything.

The choreography of invisible systems is more complex than the arrangement of visible ones. When there are no buttons, no screens, no visual hierarchy. We must design the behavior of intelligence itself.

The new design challenges:

  • Timing: When should a system act vs. wait?
  • Confidence thresholds: How certain must AI be before autonomous action?
  • Failure modes: What happens when invisible systems break?
  • Trust calibration: How do users learn to trust silent systems without constant confirmation? The initial trust barrier is high. We must design “Minimum Viable Transparency”, showing just enough to prove competence, then fading away.
  • Ethical boundaries: Which decisions should never be automated?
  • Multi-agent coordination: How do multiple AI systems collaborate invisibly?

The designer’s expanded role:

  • Behavioral architecture (defining rules for observe/infer/act)
  • Temporal design (orchestrating when things happen)
  • Confidence modeling (thresholds for action vs. confirmation)
  • Failure choreography (graceful degradation)
  • Trust engineering (enabling safe abdication of control)
  • Ethical guardrails (boundaries AI can’t cross)

The deliverable is no longer a prototype: it’s a behavioral model. Principles and constraints governing how an intelligent system operates.

Current tools (Figma, Sketch) are built for spatial arrangement, not behavioral orchestration. We need new frameworks, vocabularies, and tools for designing invisible experiences.

7. The measure of intelligence

An intelligent system is one you forget exists.

Not because it fails to act, but because it acts so precisely aligned with your needs that you attribute its effects to the natural order of things.

Current metrics are wrong:

  • Engagement (time spent, frequency)
  • Satisfaction (NPS, ratings)
  • Performance (speed, uptime)

These measure user interaction with systems, not user outcomes despite systems.

The best post-interface system would score poorly on engagement (users don’t need to engage). It might score poorly on satisfaction surveys (users forget it exists to evaluate).

New metrics needed:

  • Cognitive offload (how much mental effort is eliminated?)
  • Outcome achievement (did goals manifest without intervention?)
  • Attention preservation (how much focus is protected vs. fragmented?)
  • Invisible reliability (how often did things “just work”?)

The measurement paradox

  1. Low intelligence systems generate measurable activity
  2. High intelligence systems generate measurable absence

We must learn to measure negative space. To quantify what didn’t require attention. To value interactions that never needed to happen.

Success becomes invisible. Metrics must evolve to see it anyway.

Implications

Design

These principles require fundamental changes to our practice. We are moving from "Spatial Design" (arranging pixels) to "Temporal Design" (choreographing behavior).

The end of the "User Flow": The linear "User Journey Map" is dead. It assumes a predictable path. We must now design Context Maps. Understanding the cloud of data points around a user and defining how the system reacts to them in real-time.

Accessibility in the Dark: Screenless interfaces are a double-edged sword. They liberate the blind from visual bias, but they can isolate the deaf from information. When the interface is invisible, how do we ensure it is universal? We must design multimodal fallbacks: haptic, auditory, visual, that trigger automatically based on ability.

Generative UI (The Last Designers): We must be honest. We are the last generation of interface designers. In the Post-Interface era, "designing" a screen is an inefficiency. Why pre-bake a layout in Figma for a context that might happen next year?The system should generate the interface on the fly: perfectly adapted to the user’s eyesight, lighting, and urgency. We stop being painters. We become the authors of the algorithm that paints.

New tools for new problems: Figma and Sketch are built for static arrangement. We need tools that simulate behavior, confidence thresholds, and trust calibration. The "Figma for Logic" hasn't been built yet, but we desperately need it.

Business

Post-Interface Design destroys the current SaaS business model.

The death of "Engagement": Today, companies optimize for "Time in App". If I design a system that works perfectly (invisibly), the user never opens the app. In the current model, that looks like failure (Churn). In the new model, that is the definition of success.

Pricing the outcome: We must shift from pricing by "Seat" (access to the tool) to pricing by "Outcome" (value delivered).

  • Old model: Pay $10/month for a calendar app.
  • New model: Pay $0.5/meeting successfully scheduled.

This is harder to measure, but it is the only way to align incentives. If the AI is doing the work, you pay for the result, not the interface.

The differentiation crisis: When the interface disappears, visual branding becomes commodified. You can't compete on "Sleek UI" if no one sees it. Companies will compete on Trust and Reliability. The brand is no longer what it looks like; it is how well it behaves.

Ethics

We are asking users to make a dangerous trade.

The surveillance bargain: To make this work, the system needs to watch you. Constantly. Location, health, communications, finances. You cannot have "anticipatory design" without total surveillance.We are trading privacy for agency. This bargain must be made consciously, not hidden in Terms of Service.

The agency paradox: If the system does everything for me, do I lose the ability to do it myself? We risk creating Learned Helplessness at a societal scale. If the GPS always navigates, I forget how to read a map. We must design "Manual Modes" and "Exit Hatches", not just for debugging, but to keep our human cognitive muscles sharp. We are building systems to augment us, not to replace us.

The serendipity deficit: If systems adapt perfectly to our past behavior, they trap us in a loop of reinforced preference. We risk optimizing for efficiency at the cost of discovery. We must design for randomness: intentional friction that introduces us to the unexpected.

Conclusion

The Post-Interface era arrives not when we build better screens, but when we stop building them entirely.

We are not building systems that think for us. We are building systems that allow us to think clearly for the first time in decades.

The Synthetic Subconscious

The ultimate destination of Post-Interface Design is not an assistant that lives in your pocket; it is an expansion of the mind itself. When the latency between "intent" and "action" drops to zero, the tool ceases to be a tool. It becomes a limb.

When I pick up a cup, I don't "command" my hand to grasp it. I simply want the cup, and the hand obeys. The interface is biological, invisible, and immediate.

The future of AI is not a better chatbot. It is a Synthetic Subconscious.

It is a layer of intelligence that processes the world for you, filters the noise, and presents you not with notifications, but with intuition. You won't "read" that it's going to rain; you will just "know" to take an umbrella. You won't "search" for the answer; the fact will simply arrive in your mind the moment you need it.

We are not building software. We are building an Exocortex.

The era of "Human-Computer Interaction" is ending.The era of "Human-Computer Integration" has begun.

References & Inspiration

  • Golden Krishna, The Best Interface Is No Interface (2015)
  • Amber Case, Calm Technology (2015)
  • Don Norman, The Invisible Computer (1998)
  • Mark Weiser, The Computer for the 21st Century (1991)
  • Bret Victor, The Humane Representation of Thought

About the author

Andrea Bergonzi is a designer based in Paris, currently working on the design of AI-native tools at Sinch.

His path to Post-Interface Design was non-linear. He began as the founder of a health-tech startup (Bio Cardiological Computing), where he patented medical devices that required absolute precision and zero friction: a lesson that shaped his philosophy that "the best interface is the one that disappears".

He has since spent six years designing complex systems for companies like Brevo and Airfinity, always hearing about predictive technology (health, marketing, etc.), reinforcing the idea that people want systems that anticipate their desired outcomes.

He wrote Post-Interface Design because he believes we are trapped in a "Screen Era" that is holding back human potential. He is currently working on the frameworks to get us out.

Want to exchange thoughts?

Connect on LinkedIn or Email Me.

Post-Interface Design

On the abolition of interface in the age of autonomous systems

Andrea Bergonzi ・ FEB 2026 ・ Paris

Read online

Download PDF

Introduction

For thirty years, we’ve known where interfaces are headed: they disappear.

Mark Weiser said it in 1991. Don Norman wrote a whole book about it in 1998. The design community has been talking about “zero UI” and “ambient intelligence” for decades.

The theory is solved. The practice is not.

We have voice assistants that misunderstand, smart homes that need apps, AI that requires prompting. The gap between vision and reality remains enormous: not because we lack ideas, but because we lack the courage and craft to make those ideas real.

This manifesto synthesizes thirty years of research with today’s AI capabilities to propose seven principles for post-interface design: systems that understand rather than wait to be told, that act in silence rather than demand attention, that adapt to humans rather than forcing humans to adapt to them.

The technology is finally ready. The work begins now.

Historical context

The destination has been visible for decades.

Mark Weiser (1991) described ubiquitous computing: technology so embedded it becomes invisible. He imagined a world where computers fade into the background of our lives, becoming as unremarkable as electricity or running water.

Don Norman (1998) wrote The Invisible Computer, arguing that the best technology is the one you don’t think about. He predicted that as machines became more intelligent, explicit interfaces would become unnecessary: even counterproductive.

Amber Case (2015) gave us Calm Technology: systems that respect our attention.

Golden Krishna (2015) challenged us to stop slapping screens on everything.

The 2000s-2010s brought waves of research: calm technology, ambient intelligence, anticipatory design, zero UI. Academic papers, conference talks, design manifestos. All pointing to the same endpoint.

The problem? We couldn’t build it.

The technology wasn’t there. AI couldn’t understand the context. Sensors were expensive. Machine learning was brittle. Voice recognition failed constantly. We had the vision but not the capability.

So we kept optimizing the interface paradigm instead.

We made GUIs more intuitive. We invented touch screens. We added voice commands. We built chat interfaces. Each generation is celebrated as revolutionary, each still fundamentally requiring explicit instruction from users.

But something changed in the 2020s.

AI systems can now understand context from multiple signals: location, time, behavior patterns, conversation history, even visual input. They can plan multi-step workflows. They can learn preferences without explicit configuration.

They can act autonomously with reasonable reliability.

The technical foundation that Weiser and Norman imagined finally exists.

What’s missing isn’t technology anymore. It’s design frameworks for systems that work invisibly.

The failure of interface paradigms

Every interface paradigm has been a compromise. A necessary evil while we waited for machines to get smarter.

Command line (1960s-1980s)

Complete cognitive burden on the user. You had to know the syntax, memorize commands, understand the file system structure. Powerful for experts, unusable for everyone else.

Why it failed: Required users to think like computers.

Graphical User Interface (1980s-present)

Reduced cognitive burden through visual metaphors. You could explore, discover, point and click. Revolutionary in 1984. Still the dominant paradigm in 2026.

Why it’s failing: Screens constrain thought.

They force us to flatten intentions into two-dimensional space, to serialize what might be simultaneous, to make visible what could remain ambient. Every interface is a bottleneck where human need must squeeze through pixels and gestures.

The average person checks their phone 144 times per day. Most interactions under 30 seconds, most requiring visual focus that pulls them from presence into performance.

Voice Interfaces (2010s-present)

Natural language input seemed like the answer. Just speak to your devices like you speak to people.

Why it’s failing: Conversation is still performance.

It requires formulation (converting vague wants into language), articulation (expressing clearly enough for parsing), verification (confirming understanding), and iteration (refining when it failed).

For complex requests, this works. For routine tasks, it’s friction. I shouldn’t need to ask my calendar to schedule my weekly team meeting: it should recognize the pattern. I shouldn’t need to request a ride home after evening class: the system should know my routine.

Chat Interfaces (2020s-present)

LLMs made conversational AI feel magical. ChatGPT, Claude, Gemini: everyone celebrated them as revolutionary.

Why they’re failing: Chat is just another form of explicit instruction.

I’m still negotiating with the machine, still translating my intentions into its language, still doing the work of communication.

True intelligence shouldn’t require me to speak. It should require me to exist, to move through the world with my needs, patterns, and context.

The pattern

Each paradigm reduces cognitive burden slightly. Each is celebrated as a breakthrough. Each still fundamentally requires users to explicitly tell machines what to do.

The interface has softened. It hasn’t disappeared.

Transitional phase: Chat and Agentic systems

We’re living through a specific moment right now: the transition from assistive AI to autonomous AI.

OpenClaw launched in late 2025 and hit 145,000 GitHub stars in ten weeks and was rapidly acquired by OpenAI in early 2026. It’s an AI assistant that runs on your computer, accesses your apps, and executes tasks autonomously through messaging platforms like WhatsApp and Telegram.

You message it: “Schedule coffee with Sarah next week". It checks both calendars, finds mutual availability, sends coordination emails, books the time, sets reminders. You never open a calendar app.

This is progress. It proves autonomous agents can work.

But it’s not the endpoint.

You’re still messaging the system. You’re still looking at screens. You’re still instructing, even if the instructions are high-level.

It’s better than clicking through menus, but it’s not silence. It’s not invisible.

OpenClaw demonstrates where we are: autonomous in execution, but not in understanding.

The limitation of current agents: Current autonomous agents, like OpenClaw, AI coding assistants, research tools, etc., share a fundamental limitation: they require explicit invocation.

You have to tell them to act. You type a message. You give a command. You initiate the interaction.

The vision of ambient intelligence is different: the system observes continuously, understands context, and acts when needed without being asked.

Example:

Current (OpenClaw): I message “book haircut for Saturday” → System accesses booking API → Confirms appointment → Adds to calendar

Post-Interface: System notices it’s been 4 weeks since last haircut → Knows my preferred time is Saturday morning → Books appointment autonomously → I discover it when checking my calendar, or get a subtle reminder Saturday morning

The difference is intention versus instruction. Understanding versus waiting to be told.

The Post-Interface model

Here’s what comes next. Seven principles that define how invisible systems should work.

1. Screens are a phase

Screens emerged because they were necessary, not because they were ideal. We needed a surface to translate the invisible into the visible.

As AI closes the capability gap (understanding location, activity, biometrics, patterns, preferences) the need for visual mediation diminishes.

The evolution:

  • Command line → GUI → Voice/Chat → Ambient intelligence
  • Each step reduces explicit work required from users
  • Each step moves closer to contextual understanding

The endpoint: Systems that understand us directly. Trust so complete that visual confirmation becomes unnecessary.

Screens aren’t the destination. They’re scaffolding. As the building completes, the scaffolding comes down.

The death of Input, the rebirth of Output: We must distinguish between the screen as a control panel and the screen as a canvas.

Humans are biologically wired for visual processing. We decode images instantly, while text requires cognitive decoding. We will always need surfaces to show us a map, to display a memory, or to visualize a complex dataset. We will not stop watching movies or looking at art.

The Post-Interface revolution is about killing Input UI, not Output UI.

  • Input UI (dying): Forms, drop-downs, hamburger menus, settings toggles. These are friction. They exist only because the machine doesn't understand us.
  • Output UI (thriving): High-fidelity data, immersive media, augmented reality overlays. These are value.

In the future, screens won't be things we poke to make things happen. They will be magic mirrors that simply show us what we need to know. The "button" dies. The "pixel" remains.

2. Chat is transitional

Conversational interfaces leverage natural language, which feels natural.

But conversation carries overhead: formulation, articulation, verification, iteration.

As contextual AI improves, the ratio of valuable chat interactions to unnecessary ones inverts:

  • Early stage: Most interactions require chat
  • Mature stage: Most happen silently, chat reserved for true novelty or override

Chat doesn’t disappear: it becomes optional. The emergency exit, not the main entrance.

The shift: From prompt engineering (how do I ask correctly?) to context engineering (how does the system observe correctly?).

3. Intention > Instruction

Current interfaces conflate “what” with “how”. Users must navigate implementation logic to achieve goals.

Want to book travel? Search flights, compare prices, cross-check policies, calculate costs, coordinate timing, manage multiple systems. The outcome is simple: “get to Berlin on these dates”. The execution is complex: death by a thousand clicks.

The intention-first paradigm:

  • User specifies high-level intention
  • System determines optimal execution
  • AI infers goals, evaluates options, executes workflows, handles ambiguity, learns from outcomes

The technical foundation exists. What’s missing is trust, and the design frameworks that enable trust without visibility.
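Under assumed interfaces, the intention-first loop is small enough to sketch: a planner (imagine an LLM with tool access) decomposes the goal, executors carry it out, and only the outcome surfaces. The `planner`, `executors`, and `journal` names are invented for illustration.

```python
def fulfill(intention: str, context: dict, planner, executors) -> None:
    """Turn a high-level goal into execution, invisibly (illustrative)."""
    # The system, not the user, decomposes the goal into steps.
    plan = planner.decompose(goal=intention, context=context)

    for step in plan:
        executor = executors[step.tool]     # flights, calendar, expenses...
        result = executor.run(step.action, step.params)
        planner.observe(step, result)       # learn from outcomes

    # Surface only the result, never the workflow.
    context["journal"].record(f"Done: {intention}")
```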

The proper division of labor:

  • Humans: desire, judgment, meaning-making
  • Machines: execution, coordination, optimization

We should each do what we’re best at.

Think of Tony Stark working with JARVIS in Iron Man. Stark doesn’t operate the suit by pulling levers or pressing buttons. He states intentions: “divert power to repulsors”, “run diagnostics on arc reactor”, and JARVIS executes. The technical complexity is delegated. Stark focuses on strategy, creativity, and judgment.

That’s not passivity. That’s augmentation.

When I say “intention over instruction”, I mean freeing humans to focus on what we’re uniquely good at (vision, creativity, judgment) while AI handles what it’s uniquely good at (execution, coordination, optimization).

The goal isn’t to make us lazy. It’s to make us powerful.

The death of the "Mode": Today, we are forced to choose a "mode" before we act. We decide to open a chat app (to type), or press a microphone button (to speak), or pick up a mouse (to click). We have to pre-configure our input method.

Post-Interface Design abolishes the "mode".

In a truly intelligent system, input is fluid.

  • I might point at a lamp (gesture).
  • And say, "Make it warmer" (voice).
  • While looking at my book (gaze context).

The system doesn't need three separate apps for this. It fuses the signals. It understands that "it" refers to the lamp I'm pointing at, and that "warmer" means the light's color temperature, not the room's heat, because it knows I am reading.

We don't stop typing. We don't stop talking. We just stop switching. We use whatever signal is most natural in the moment, and the machine catches it all.
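A rough sketch of what that fusion could look like, with entirely hypothetical signal payloads. Three modalities collapse into one intent:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str       # "gesture" | "voice" | "gaze"
    payload: dict

def fuse(signals: list[Signal]) -> dict:
    """Collapse overlapping modalities into a single intent (toy logic)."""
    gesture = next((s for s in signals if s.kind == "gesture"), None)
    voice = next((s for s in signals if s.kind == "voice"), None)
    gaze = next((s for s in signals if s.kind == "gaze"), None)

    intent = {"action": voice.payload["command"] if voice else None}

    # "it" binds to whatever the gesture is pointing at.
    if gesture and intent["action"] and "it" in intent["action"]:
        intent["target"] = gesture.payload["pointed_object"]

    # Gaze disambiguates: reading means "warmer" = color temperature.
    if gaze and gaze.payload.get("activity") == "reading":
        intent["dimension"] = "color_temperature"

    return intent
```

No mode was selected. The signals arrived in whatever order was natural, and the system resolved them together.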

4. Silence is a feature

Contemporary systems are pathologically chatty. Every action generates feedback: notifications, confirmations, progress bars, success messages.

We’ve created the digital equivalent of a refrigerator that pings you every time it keeps food cold.

This death by a thousand notifications is fundamentally at odds with ambient intelligence. You cannot disappear into the background while constantly announcing your presence.

Silence requires three conditions:

  1. Sufficient reliability that users don’t need confirmation of basic operations
  2. Appropriate scoping of what deserves attention vs. what should remain invisible
  3. Effective failure signaling when intervention is genuinely required

As systems mature, notifications invert from default to exception. Silent operation becomes the primary interface. Sound becomes reserved for genuine anomalies.

The shift: From “prove the system worked” to “assume the system worked unless informed otherwise”.
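One way to encode that inversion, as a sketch with invented confidence thresholds: silence is the default, confirmation is the middle ground, and notification is the exception.

```python
ACT_SILENTLY = 0.95   # above this confidence: act, say nothing
ASK_FIRST = 0.60      # between the two: confirm before acting

def handle(task, confidence: float, confirm, notify):
    """Notification as exception, not default (thresholds are illustrative)."""
    if confidence >= ACT_SILENTLY:
        task.run()                            # silence is the success signal
    elif confidence >= ASK_FIRST:
        if confirm(task.summary()):           # chat as the emergency exit
            task.run()
    else:
        notify(f"Need your judgment on: {task.summary()}")  # genuine anomaly
```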

The electricity in your walls doesn’t announce its presence. The foundation of your building doesn’t seek approval. The best infrastructure becomes environmental: so reliable it dissolves into the background of existence.

5. Systems must adapt (The bespoke principle)

The old paradigm assumed users would adapt to systems. We called this the "learning curve". We celebrated power users who memorized shortcuts to navigate broken mental models.

This was backwards. Intelligence is measured by adaptation. If a system requires you to change your behavior to use it, the system has failed.

The off-the-rack problem: Right now, all software is "off-the-rack". Photoshop has 10,000 features because it is designed for all photographers. Gmail looks the same whether you get 5 emails a day or 500.

An off-the-rack suit forces you to adjust your posture to hide the bad fit. "Off-the-rack" software forces you to adjust your thinking to match the developer's logic. We call this "flexible", but it is really just lazy design dressed up as user choice.

True adaptation is behavioral: "personalization" today is superficial, limited to picking a dark mode or a color theme. That is decoration, not adaptation.

A Post-Interface system is bespoke. It molds to the human:

  • It observes: Learning how you actually use the tools vs. how the designer expected.
  • It trims: Hiding the 9,000 Photoshop features you never touch.
  • It evolves: Reshaping itself as your skills improve.

The shift: We are no longer designing static interfaces. We are creating adaptive frameworks. The deliverable is not a Figma file; it is an algorithm that generates a bespoke experience for a user of one.

Bespoke quality at off-the-rack cost.
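A minimal sketch of the observe-and-trim loop, assuming a per-user feature log. The class name and the cutoff are invented; the principle is that what the user sees is computed from what the user does.

```python
from collections import Counter

class BespokeSurface:
    """Shapes the exposed surface around one user's actual behavior."""

    def __init__(self, all_features: list[str], keep: int = 12):
        self.all_features = all_features
        self.keep = keep
        self.usage = Counter()

    def observe(self, feature: str) -> None:
        self.usage[feature] += 1   # learn actual use, not designer intent

    def visible_features(self) -> list[str]:
        # Trim: expose only what this user of one reaches for.
        top = [f for f, _ in self.usage.most_common(self.keep)]
        return top or self.all_features[: self.keep]  # cold-start fallback
```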

6. Designers will design behavior

When interfaces disappear, what remains to design?

Everything.

The choreography of invisible systems is more complex than the arrangement of visible ones. When there are no buttons, no screens, and no visual hierarchy, we must design the behavior of intelligence itself.

The new design challenges:

  • Timing: When should a system act vs. wait?
  • Confidence thresholds: How certain must AI be before autonomous action?
  • Failure modes: What happens when invisible systems break?
  • Trust calibration: How do users learn to trust silent systems without constant confirmation? The initial trust barrier is high. We must design “Minimum Viable Transparency”, showing just enough to prove competence, then fading away.
  • Ethical boundaries: Which decisions should never be automated?
  • Multi-agent coordination: How do multiple AI systems collaborate invisibly?

The designer’s expanded role:

  • Behavioral architecture (defining rules for observe/infer/act)
  • Temporal design (orchestrating when things happen)
  • Confidence modeling (thresholds for action vs. confirmation)
  • Failure choreography (graceful degradation)
  • Trust engineering (enabling safe abdication of control)
  • Ethical guardrails (boundaries AI can’t cross)

The deliverable is no longer a prototype: it’s a behavioral model. Principles and constraints governing how an intelligent system operates.
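What might that deliverable look like? Possibly closer to a declared policy than a mockup. Every key and value below is illustrative, not a standard schema:

```python
# A behavioral model as an artifact: rules, not pixels (illustrative).
BEHAVIOR_MODEL = {
    "observe": ["calendar", "location", "purchase_history"],
    "quiet_hours": ("22:00", "07:00"),        # temporal design
    "act_threshold": 0.95,                    # confidence for silent action
    "confirm_threshold": 0.60,                # below this, ask first
    "never_automate": [                       # ethical guardrails
        "medical decisions",
        "payments above 500 EUR",
        "messages sent in the user's name",
    ],
    "on_failure": "surface_immediately",      # failure choreography
    "transparency": "verbose_first_30_days",  # Minimum Viable Transparency
}
```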

Current tools (Figma, Sketch) are built for spatial arrangement, not behavioral orchestration. We need new frameworks, vocabularies, and tools for designing invisible experiences.

7. The measure of intelligence

An intelligent system is one you forget exists.

Not because it fails to act, but because it acts so precisely aligned with your needs that you attribute its effects to the natural order of things.

Current metrics are wrong:

  • Engagement (time spent, frequency)
  • Satisfaction (NPS, ratings)
  • Performance (speed, uptime)

These measure how much users interact with systems, not what users achieve when systems stay out of the way.

The best post-interface system would score poorly on engagement (users don’t need to engage). It might score poorly on satisfaction surveys (users forget it exists to evaluate).

New metrics needed:

  • Cognitive offload (how much mental effort is eliminated?)
  • Outcome achievement (did goals manifest without intervention?)
  • Attention preservation (how much focus is protected vs. fragmented?)
  • Invisible reliability (how often did things “just work”?)

The measurement paradox

  1. Low-intelligence systems generate measurable activity
  2. High-intelligence systems generate measurable absence

We must learn to measure negative space. To quantify what didn’t require attention. To value interactions that never needed to happen.

Success becomes invisible. Metrics must evolve to see it anyway.
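A sketch of what measuring negative space could mean in practice, assuming a hypothetical event log where each entry records whether a human was ever involved:

```python
def invisible_reliability(events: list[dict]) -> float:
    """Share of completed tasks that needed no human attention at all."""
    done = [e for e in events if e["status"] == "completed"]
    silent = [e for e in done
              if not e["user_intervened"] and not e["notified"]]
    return len(silent) / len(done) if done else 0.0

def cognitive_offload(events: list[dict]) -> int:
    """Steps the user would have performed manually, but never had to."""
    return sum(e.get("steps_automated", 0) for e in events)
```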

Implications

Design

These principles require fundamental changes to our practice. We are moving from "Spatial Design" (arranging pixels) to "Temporal Design" (choreographing behavior).

The end of the "User Flow": The linear "User Journey Map" is dead. It assumes a predictable path. We must now design Context Maps: understanding the cloud of data points around a user and defining how the system reacts to them in real time.

Accessibility in the Dark: Screenless interfaces are a double-edged sword. They liberate the blind from visual bias, but they can isolate the deaf from information. When the interface is invisible, how do we ensure it is universal? We must design multimodal fallbacks (haptic, auditory, visual) that trigger automatically based on ability.

Generative UI (The Last Designers): We must be honest. We are the last generation of interface designers. In the Post-Interface era, "designing" a screen is an inefficiency. Why pre-bake a layout in Figma for a context that might happen next year? The system should generate the interface on the fly: perfectly adapted to the user’s eyesight, lighting, and urgency. We stop being painters. We become the authors of the algorithm that paints.

New tools for new problems: Figma and Sketch are built for static arrangement. We need tools that simulate behavior, confidence thresholds, and trust calibration. The "Figma for Logic" hasn't been built yet, but we desperately need it.

Business

Post-Interface Design destroys the current SaaS business model.

The death of "Engagement": Today, companies optimize for "Time in App". If I design a system that works perfectly (invisibly), the user never opens the app. In the current model, that looks like failure (Churn). In the new model, that is the definition of success.

Pricing the outcome: We must shift from pricing by "Seat" (access to the tool) to pricing by "Outcome" (value delivered).

  • Old model: Pay $10/month for a calendar app.
  • New model: Pay $0.50 per meeting successfully scheduled.

This is harder to measure, but it is the only way to align incentives. If the AI is doing the work, you pay for the result, not the interface.
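A back-of-envelope comparison using the illustrative numbers above shows how the incentive flips: light users stop subsidizing heavy ones, and revenue tracks delivered value.

```python
SEAT_PRICE = 10.00    # $/month: pay for access
PER_OUTCOME = 0.50    # $/meeting scheduled: pay for results

def monthly_cost(meetings_scheduled: int) -> dict:
    return {
        "seat_model": SEAT_PRICE,
        "outcome_model": PER_OUTCOME * meetings_scheduled,
    }

print(monthly_cost(8))    # {'seat_model': 10.0, 'outcome_model': 4.0}
print(monthly_cost(40))   # {'seat_model': 10.0, 'outcome_model': 20.0}
```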

The differentiation crisis: When the interface disappears, visual branding becomes commodified. You can't compete on "Sleek UI" if no one sees it. Companies will compete on Trust and Reliability. The brand is no longer what it looks like; it is how well it behaves.

Ethics

We are asking users to make a dangerous trade.

The surveillance bargain: To make this work, the system needs to watch you. Constantly. Location, health, communications, finances. You cannot have "anticipatory design" without total surveillance. We are trading privacy for agency. This bargain must be made consciously, not hidden in Terms of Service.

The agency paradox: If the system does everything for me, do I lose the ability to do it myself? We risk creating Learned Helplessness at a societal scale. If the GPS always navigates, I forget how to read a map. We must design "Manual Modes" and "Exit Hatches", not just for debugging, but to keep our human cognitive muscles sharp. We are building systems to augment us, not to replace us.

The serendipity deficit: If systems adapt perfectly to our past behavior, they trap us in a loop of reinforced preference. We risk optimizing for efficiency at the cost of discovery. We must design for randomness: intentional friction that introduces us to the unexpected.

Conclusion

The Post-Interface era arrives not when we build better screens, but when we stop building them entirely.

We are not building systems that think for us. We are building systems that allow us to think clearly for the first time in decades.

The Synthetic Subconscious

The ultimate destination of Post-Interface Design is not an assistant that lives in your pocket; it is an expansion of the mind itself. When the latency between "intent" and "action" drops to zero, the tool ceases to be a tool. It becomes a limb.

When I pick up a cup, I don't "command" my hand to grasp it. I simply want the cup, and the hand obeys. The interface is biological, invisible, and immediate.

The future of AI is not a better chatbot. It is a Synthetic Subconscious.

It is a layer of intelligence that processes the world for you, filters the noise, and presents you not with notifications, but with intuition. You won't "read" that it's going to rain; you will just "know" to take an umbrella. You won't "search" for the answer; the fact will simply arrive in your mind the moment you need it.

We are not building software. We are building an Exocortex.

The era of "Human-Computer Interaction" is ending. The era of "Human-Computer Integration" has begun.

References & Inspiration

  • Golden Krishna, The Best Interface Is No Interface (2015)
  • Amber Case, Calm Technology (2015)
  • Don Norman, The Invisible Computer (1998)
  • Mark Weiser, The Computer for the 21st Century (1991)
  • Bret Victor, The Humane Representation of Thought (2014)

About the author

Andrea Bergonzi is a designer based in Paris, currently working on the design of AI-native tools at Sinch.

His path to Post-Interface Design was non-linear. He began as the founder of a health-tech startup (Bio Cardiological Computing), where he patented medical devices that required absolute precision and zero friction: a lesson that shaped his philosophy that "the best interface is the one that disappears".

He has since spent six years designing complex systems for companies like Brevo and Airfinity, always in the orbit of predictive technology (health, marketing, and beyond), which reinforced the idea that people want systems that anticipate their desired outcomes.

He wrote Post-Interface Design because he believes we are trapped in a "Screen Era" that is holding back human potential. He is currently working on the frameworks to get us out.

Want to exchange thoughts?

Connect on LinkedIn or Email Me.