An on-device model of the user's working state, built for AI copilots and agentic systems. It reads interaction patterns and outputs a compact state vector that agents can act on in real time.
Modern copilots already read your code, your docs, your tickets, your history. They have memory, tools, and the ability to act on your behalf. What they can't see is you.
Today's agents behave the same way whether the user is in deep, sustained focus or compulsively switching tabs. They don't notice the moment someone gets stuck — the loops, undos, repeated attempts in the same place. They have no signal for whether now is a good moment to step in or a terrible one.
The result shows up in the metrics. Suggestion accept rates plateau. Some users disable the copilot entirely. Others ignore most of what it says. Product teams are afraid to make agents more proactive because they know it will hurt the experience for the wrong users at the wrong time.
- Copilots fire the same way at a focused user and a fragmented one.
- No signal for stuck states: the loops, the undos, the dead-end revisits.
- The "agency" conversation is about architecture and tools, not about the human running the system.
We're opening private access to Qualia-1 for a small number of teams already running — or actively building — AI copilots and agentic systems inside IDEs and AI workspaces.
If you want to test how working-state signals change accept rate, dwell time, and user retention in your product, request a pilot below.
Qualia-1 runs locally on the user's device and continuously infers their working state from interaction patterns alone. It never reads the contents of code, documents, or messages.
The model uses behavioral signals only:

- Typing dynamics: rhythm, pauses, edits, run-length.
- Mouse motion and clicks: excess movement, returns to the same elements.
- Window and task switching: continuous-session length within one context.
- Reactions to copilot actions: whether suggestions are used, dismissed instantly, or trigger an abrupt context change.
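To make the signal surface concrete, here is an illustrative TypeScript shape for an aggregated behavioral feature frame. Every name and field below is an assumption for exposition, not Qualia-1's actual schema.

// Illustrative shape of an aggregated behavioral feature frame.
// Every name and field here is an assumption, not Qualia-1's schema.
interface BehaviorFrame {
  typing: {
    meanInterKeyMs: number;       // rhythm
    longPauseCount: number;       // pauses above a threshold
    editRatio: number;            // deletions per keystroke
    maxRunLength: number;         // longest uninterrupted typing burst
  };
  pointer: {
    pathExcessRatio: number;      // distance traveled vs. straight-line distance
    repeatTargetVisits: number;   // returns to the same UI elements
  };
  switching: {
    continuousSessionSec: number; // time spent within one context
    switchesPerMinute: number;
  };
  copilotReactions: {
    accepted: number;
    instantDismissals: number;
    abruptContextChanges: number;
  };
}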
The model emits a compact state vector for the current user. All values are computed on device and exposed to your agent as a simple read that can be polled or streamed (a consumer sketch follows the example below).
{
"focus_state": "deep", // deep | moderate | fragmented
"workload_state": "normal", // light | normal | intense
"friction_level": "low", // low | high
"interruptibility": "good", // good | neutral | bad
"confidence": 0.83,
"timestamp": "2026-05-05T14:22:09Z"
}

Qualia-1 doesn't predict emotions, diagnoses, or long-term wellbeing. It surfaces a small set of working variables your product team can wire directly into agent logic.
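Below is a minimal consumer sketch in TypeScript, assuming the on-device runtime serves the latest vector over a local HTTP endpoint. The endpoint URL, the polling cadence, the confidence threshold, and `applyAgentPolicy` are all assumptions for illustration, not a documented API.

// Minimal consumer sketch: poll the on-device state vector once per second.
// The endpoint, cadence, and threshold are assumptions, not a documented API.
type FocusState = "deep" | "moderate" | "fragmented";
type WorkloadState = "light" | "normal" | "intense";

interface StateVector {
  focus_state: FocusState;
  workload_state: WorkloadState;
  friction_level: "low" | "high";
  interruptibility: "good" | "neutral" | "bad";
  confidence: number;  // 0..1
  timestamp: string;   // ISO 8601
}

// Hypothetical local endpoint served by the on-device runtime.
async function readState(): Promise<StateVector> {
  const res = await fetch("http://127.0.0.1:7411/v1/state");
  return (await res.json()) as StateVector;
}

// Placeholder for your product's state -> intervention mapping.
function applyAgentPolicy(state: StateVector): void {
  console.log(`focus=${state.focus_state} interruptibility=${state.interruptibility}`);
}

setInterval(async () => {
  const state = await readState();
  if (state.confidence >= 0.6) applyAgentPolicy(state); // skip low-confidence frames
}, 1000);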
We collected a held-out evaluation set built to expose where state-aware models should win: turns where the right answer depends on the user's trajectory, not on the surface text of their reply. Each multi-turn dialogue encodes controlled latent states ("patience budget" and "aesthetic drift") through behavioral patterns alone; the states are never named in words. At the killer turn, the model must predict the correct user reaction (accept · modify · reject · disengage) from the trajectory itself.
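For concreteness, a minimal sketch of how such a killer-turn benchmark could be scored. The record shape, field names, and the plain-accuracy metric are assumptions for illustration, not the actual evaluation harness.

// Score killer-turn predictions as plain accuracy over the four reaction classes.
// Record shape and field names are assumptions, not the real harness.
type Reaction = "accept" | "modify" | "reject" | "disengage";

interface EvalDialogue {
  trajectory: unknown[];  // behavioral turns leading up to the killer turn
  goldReaction: Reaction; // held-out label for the killer turn
}

function killerTurnAccuracy(
  dialogues: EvalDialogue[],
  predict: (d: EvalDialogue) => Reaction,
): number {
  const correct = dialogues.filter((d) => predict(d) === d.goldReaction).length;
  return correct / dialogues.length;
}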
Suppress popups and heavy dialogs during deep focus. Surface inline help only. In high-friction states, light up the bigger moves — collect context, take over the routine, ship a finished result instead of a chain of small steps.
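As a sketch of that policy, reusing the `StateVector` type from the read example above: the channel names and the priority order here are illustrative choices, not a prescribed mapping.

// Pick a delivery channel for an intervention based on the current state.
// Channel names and priority order are illustrative, not prescribed.
type Channel = "none" | "inline" | "dialog" | "takeover";

function interventionChannel(state: StateVector): Channel {
  if (state.focus_state === "deep") return "inline";      // suppress dialogs in deep focus
  if (state.friction_level === "high") return "takeover"; // user is stuck: make the bigger move
  if (state.interruptibility === "bad") return "none";
  return "dialog";
}

Which rule wins when deep focus and high friction overlap is a product decision; the order above is one defensible choice.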
Under light or normal load, the agent can propose stronger actions — refactors, rewrites, structural moves. Under intense load with high friction, scale back to surgical suggestions with minimal blast radius.
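A matching sketch for action sizing, again reusing `StateVector`; the scale names and thresholds are assumptions.

// Scale proposed actions by workload and friction (illustrative thresholds).
type ActionScale = "surgical" | "standard" | "structural";

function actionScale(state: StateVector): ActionScale {
  if (state.workload_state === "intense" && state.friction_level === "high") {
    return "surgical";   // minimal blast radius
  }
  if (state.workload_state !== "intense") {
    return "structural"; // refactors, rewrites, structural moves are fair game
  }
  return "standard";
}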
Run experiments on state → intervention mappings. Optimize accept rate, cut the copilot disable rate, and shrink the "the agent is in my way" feedback bucket, aiming for step-change reductions rather than percentage-point ones.
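One way to run those experiments, sketched under the assumption that your product already logs suggestion outcomes; the event shape and outcome labels here are hypothetical.

// Record (state, intervention, outcome) tuples so state -> intervention
// mappings can be evaluated offline. Event shape is an assumption.
interface InterventionEvent {
  state: StateVector;
  intervention: string; // e.g. "inline-hint", "auto-refactor"
  outcome: "accepted" | "dismissed" | "ignored" | "disabled-copilot";
  dwellMs: number;      // time the suggestion stayed on screen
}

const events: InterventionEvent[] = [];

function record(e: InterventionEvent): void {
  events.push(e); // in production, send to your analytics pipeline instead
}

// Example readout: accept rate sliced by focus state.
function acceptRate(focus: FocusState): number {
  const slice = events.filter((e) => e.state.focus_state === focus);
  if (slice.length === 0) return 0;
  return slice.filter((e) => e.outcome === "accepted").length / slice.length;
}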
The model is built around interaction patterns, not the person. Its scope is the behavior of agents inside a working interface — not employee evaluation, not medical diagnosis, not anything that should not exist.
Closed beta is open to a limited number of teams shipping AI copilots and agents. We work directly with each pilot on integration, evaluation, and policy design.
Request beta access →