Beyond Linear's Agent Guidelines: How Quotient Thinks About Human-AI Collaboration

Linear just published their Agent Interaction Guidelines (AIG) – a foundational framework for designing agent interactions that integrate naturally into human workflows. As longtime admirers of Linear's work, we're not surprised to see them leading the conversation on this critical topic.
Linear has consistently set the gold standard for software design – from their legendary product velocity to their obsessive attention to user experience details. Their perspective on agent-human interaction design is characteristically spot-on, establishing principles that every company building AI systems should study and adopt.
At Quotient, we've been grappling with these same challenges as we've built our agentic marketing platform. Through months of designing, testing, and refining our AI agents' interactions with users, we've discovered additional principles that complement Linear's excellent foundation. While Linear's guidelines focus primarily on transparency, accountability, and user control, our experience has taught us there are several other critical dimensions to consider when building truly collaborative human-AI systems.
In this article, we want to propose a few addenda to Linear's guidelines – additional principles born from our unique position as both builders and daily users of agentic software. These aren't corrections to Linear's work, but rather complementary insights that address some of the nuances we've encountered in creating AI agents that don't just execute tasks, but actively collaborate in complex, multi-step business processes.
Quotient's Additional Principles
An agent should be able to do anything a human user can do
Anything that the human can do in the UI, the agent should also be able to do, and vice versa. This is something we painstakingly designed in Quotient, and it goes beyond Linear's principle that agents should "inhabit the platform natively."
In the beginning, Quotient was operable only by talking to an agent – a little like Claude Artifacts. But we quickly realized that humans often want to tweak an agent's output, and that only allowing them to do so by chatting with the agent was limiting. Some things are just easier to do in a WYSIWYG editor than they are conversationally (and vice versa). We wanted the best of both worlds.
So we constructed rich editors for blogs, emails, customer segments, and more, and we gave agents and humans tools for editing the same underlying data structures. The result is a truly collaborative system where users can seamlessly switch between conversational direction ("make this email more casual") and direct manipulation (dragging components, adjusting layouts) depending on what feels most natural for the specific task at hand.
For example, if an agent writes a blog post draft, the human can jump into our visual editor to adjust headings, reorder sections, or fine-tune formatting without having to describe those changes conversationally. Conversely, if a human starts styling an email template, they can ask an agent to "make the call-to-action button more prominent" and the agent can directly modify the same design elements the human was working with. Both parties are operating on identical underlying data structures, just through different interfaces.
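One way to sketch this "identical underlying data structures, different interfaces" idea is a single edit function that both the visual editor and the agent's editing tool call. This is an illustrative sketch, not Quotient's actual API – the names (`EmailDoc`, `Edit`, `applyEdit`) are assumptions:

```typescript
// Hypothetical sketch: one shared edit path for the UI and the agent tool.
type EmailDoc = {
  subject: string;
  blocks: { id: string; kind: "text" | "button"; content: string; prominent?: boolean }[];
};

type Edit =
  | { op: "setSubject"; subject: string }
  | { op: "updateBlock"; id: string; patch: Partial<EmailDoc["blocks"][number]> };

// Single source of truth for mutations: both interfaces dispatch Edits here.
function applyEdit(doc: EmailDoc, edit: Edit): EmailDoc {
  switch (edit.op) {
    case "setSubject":
      return { ...doc, subject: edit.subject };
    case "updateBlock":
      return {
        ...doc,
        blocks: doc.blocks.map((b) => (b.id === edit.id ? { ...b, ...edit.patch } : b)),
      };
  }
}

// A drag or click in the visual editor produces an Edit...
const fromUi: Edit = { op: "updateBlock", id: "cta", patch: { content: "Buy now" } };

// ...and so does the agent's tool call ("make the button more prominent").
const fromAgent: Edit = { op: "updateBlock", id: "cta", patch: { prominent: true } };

let doc: EmailDoc = {
  subject: "Launch",
  blocks: [{ id: "cta", kind: "button", content: "Learn more" }],
};
doc = applyEdit(applyEdit(doc, fromUi), fromAgent);
```

Because both parties funnel through the same mutation path, neither interface can drift out of sync with the other's view of the document.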
An agent should see what the user sees and know what the user knows
As much as possible, the agent should share the same context that the human has. If the human is looking at a screen and talking to an agent, they expect the agent to be able to "see" what's on the screen. Similarly, they expect that the agent will remember their recent interactions and have some knowledge of what's going on inside the application, much like a human collaborator would.
This is no easy task, of course, because agents do not "see" or "remember" in the same fashion that humans do. Such knowledge must be fed to them at runtime via sophisticated context engineering. For Quotient, this means educating the agent about what page the user is on, what UI elements they are referring to when asking for changes, and the state of various parts of the app (e.g. what marketing campaigns are currently ongoing).
The payoff is enormous: when an agent understands the full context of what you're working on, conversations become dramatically more natural and productive. Instead of constantly re-explaining background information, you can jump straight into collaborative problem-solving.
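The context engineering described above can be sketched as a function that assembles runtime state – current page, selected UI element, app state, recent conversation – into text the agent receives with each turn. This is a minimal illustration; the field names are assumptions, not Quotient's actual schema:

```typescript
// Illustrative sketch of runtime context assembly for an agent prompt.
type AppContext = {
  currentPage: string;          // which page the user is looking at
  selectedElement?: string;     // UI element the user may be referring to
  activeCampaigns: string[];    // relevant app state, e.g. ongoing campaigns
  recentMessages: string[];     // recent conversation history
};

function buildSystemContext(ctx: AppContext): string {
  const lines = [
    `The user is viewing: ${ctx.currentPage}.`,
    ctx.selectedElement ? `They have selected: ${ctx.selectedElement}.` : "",
    `Ongoing campaigns: ${ctx.activeCampaigns.join(", ") || "none"}.`,
    `Recent messages: ${ctx.recentMessages.slice(-5).join(" | ")}`,
  ];
  return lines.filter(Boolean).join("\n");
}

const context = buildSystemContext({
  currentPage: "email-editor",
  selectedElement: "cta-button",
  activeCampaigns: ["Spring Launch"],
  recentMessages: ["make this email more casual"],
});
```

With a payload like this prepended to every turn, "make this more prominent" resolves unambiguously to the element the user actually has selected.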
An agent should work in a multiplayer environment by default
Agentic apps must be multiplayer by default. Multiplayer apps became popular because, when two people are working on the same thing, it's crucial that they can see each other's updates and not accidentally overwrite each other's changes. This is why we all prefer Google Docs to Microsoft Word, and Figma to Sketch.
Now, every app has a minimum of two users – the human and the agent – and they need to see each other's changes instantly on screen, just like two human collaborators. Moreover, the agent inherently works on the server side. This creates a need to constantly synchronize changes from the server to all clients – in other words, to make all apps multiplayer.
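A minimal in-memory sketch of this "every change fans out to every client" model looks like the following. Real systems would carry this over websockets and add conflict resolution (e.g. CRDTs or operational transforms); the class and field names here are illustrative:

```typescript
// Sketch: human and agent both submit changes through the server,
// which applies them and broadcasts to every connected client.
type Change = { author: "human" | "agent"; path: string; value: unknown };

class SyncServer {
  private doc: Record<string, unknown> = {};
  private clients: ((c: Change) => void)[] = [];

  connect(onChange: (c: Change) => void) {
    this.clients.push(onChange);
  }

  // Both the browser and the server-side agent submit through the same door.
  submit(change: Change) {
    this.doc[change.path] = change.value;            // apply to canonical state
    for (const notify of this.clients) notify(change); // fan out to all clients
  }

  snapshot() {
    return { ...this.doc };
  }
}

const server = new SyncServer();
const humanScreen: Change[] = [];
server.connect((c) => humanScreen.push(c)); // the human's browser subscribes

// The agent edits server-side; the human's screen updates instantly.
server.submit({ author: "agent", path: "email.subject", value: "Hello!" });
```

The key design choice is that the server holds the canonical state and every participant – human or agent – is just another client of it.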
An agent should continue working regardless of UI state
When a human closes their browser, or switches to a different part of the app, or starts a new conversation with a different agent, the previous agent's work should be uninterrupted. Like a good human coworker, the agent should continue working even when its (human) manager isn't looking over its shoulder.
From a technical perspective, this meant architecting Quotient's agent framework as an asynchronous, recursive message queue that delivers messages over websockets, rather than as an HTTP request/response model. The result is agents that truly work independently while keeping humans informed of their progress through real-time updates.
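The shape of that decoupling can be sketched as a server-side worker that drains a task queue and buffers progress events, so a client that closed its browser can replay what it missed on reconnect. This is a simplified illustration – names and the event log design are assumptions, and the real system runs asynchronously over websockets:

```typescript
// Sketch: agent work is decoupled from any open browser session.
type ProgressEvent = { task: string; status: "started" | "done" };

class AgentWorker {
  private queue: string[] = [];
  private log: ProgressEvent[] = []; // buffered progress, survives disconnects

  enqueue(task: string) {
    this.queue.push(task);
  }

  // Runs server-side; no client needs to be connected while it works.
  drain() {
    while (this.queue.length > 0) {
      const task = this.queue.shift()!;
      this.log.push({ task, status: "started" });
      // ... perform the actual work here (LLM calls, tool use) ...
      this.log.push({ task, status: "done" });
    }
  }

  // A reconnecting client replays the events it missed.
  eventsSince(index: number): ProgressEvent[] {
    return this.log.slice(index);
  }
}

const worker = new AgentWorker();
worker.enqueue("draft blog post");
worker.drain(); // the human may have closed the tab by now
const missed = worker.eventsSince(0);
```

Because progress lives on the server rather than in the browser session, closing a tab or switching conversations never interrupts the agent.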
Building the Future of Human-AI Collaboration
Linear's Agent Interaction Guidelines represent a crucial first step in defining how humans and AI should work together. Their principles around transparency, accountability, and user control create the essential foundation that every agentic system needs to get right.
Our additional principles—bidirectional interface capability, shared context, multiplayer architecture, and persistent operation—emerge from the unique challenges of building truly collaborative AI systems. These aren't theoretical concerns but practical lessons learned from months of daily use, iteration, and refinement.
We're still in the early days of figuring out how humans and AI agents should collaborate. Like Linear's guidelines, our principles will undoubtedly evolve as we learn more about what works in practice. But one thing is clear: the companies that get human-AI collaboration right will have a massive advantage in the years ahead.
If you're building agentic software, we'd love to hear about your own lessons learned. The conversation Linear started is just the beginning, and it's going to take the entire community to figure out the best practices for this new paradigm of human-computer interaction.