Qualia · Synstate Labs
v1.0 · Closed Beta · Karlsruhe / DE
Now in closed beta · 05.2026

Qualia‑1.

An on-device model of the user's working state, built for AI copilots and agentic systems. It reads interaction patterns and outputs a compact state vector that agents can act on in real time.

17.6M parameters · Runs on device · Behavioral signals only
STATE VECTOR · LIVE DEMO
  focus_state       moderate
  workload_state    normal
  friction_level    low
  interruptibility  good
  policy_hint       PROACTIVE_OK
01 / Problem

AI sees the data. It doesn't see the human.

Modern copilots already read your code, your docs, your tickets, your history. They have memory, tools, and the ability to act on your behalf. What they can't see is you.

Today's agents behave the same way whether the user is in deep, sustained focus or compulsively switching tabs. They don't notice the moment someone gets stuck — the loops, undos, repeated attempts in the same place. They have no signal for whether now is a good moment to step in or a terrible one.

The result shows up in the metrics. Suggestion accept rates plateau. Some users disable the copilot entirely. Others ignore most of what it says. Product teams are afraid to make agents more proactive because they know it will hurt the experience for the wrong users at the wrong time.

a. Copilots fire the same way at a focused user and a fragmented one.

b. No signal for stuck states — the loops, the undos, the dead-end revisits.

c. The "agency" conversation is about architecture and tools, not about the human running the system.

02 / Closed Beta

For teams shipping copilots and agents.

We're opening private access to Qualia-1 for a small number of teams already running — or actively building — AI copilots and agentic systems inside IDEs and AI workspaces.

If you want to test how working-state signals change accept rate, dwell time, and user retention in your product, request a pilot below.

03 / Meet Qualia-1

A small model that watches how, not what.

Qualia-1 runs locally on the user's device and continuously infers their working state from interaction patterns alone. It never reads the contents of code, documents, or messages.

The model uses behavioral signals only: typing dynamics (rhythm, pauses, edits, run-length); mouse motion and clicks (excess movement, returns to the same elements); window and task switching (continuous-session length within one context); and reactions to copilot actions (whether suggestions are used, dismissed instantly, or trigger an abrupt context change).
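To make the signal categories above concrete, here is a minimal sketch in TypeScript of deriving two of them from a raw interaction event stream. The event shapes, pause threshold, and feature definitions are illustrative assumptions, not the Qualia-1 feature pipeline:

```typescript
// Illustrative only: toy feature extraction over a timestamped event stream.

type InteractionEvent =
  | { kind: "keystroke"; t: number }       // t = seconds since session start
  | { kind: "window_switch"; t: number };

// Mean typing run-length: consecutive keystrokes closer than `pauseSec`
// apart count as one burst; return the average burst size.
function meanTypingRunLength(events: InteractionEvent[], pauseSec = 2): number {
  const keys = events.filter(e => e.kind === "keystroke");
  if (keys.length === 0) return 0;
  const runs: number[] = [];
  let current = 1;
  for (let i = 1; i < keys.length; i++) {
    if (keys[i].t - keys[i - 1].t <= pauseSec) current++;
    else { runs.push(current); current = 1; }
  }
  runs.push(current);
  return runs.reduce((a, b) => a + b, 0) / runs.length;
}

// Window switches per minute over the observed time span.
function switchRate(events: InteractionEvent[]): number {
  const ts = events.map(e => e.t);
  const span = Math.max(...ts) - Math.min(...ts);
  const switches = events.filter(e => e.kind === "window_switch").length;
  return span > 0 ? (switches / span) * 60 : 0;
}
```

Note that both features look only at timing and event kinds — the keystroke contents never enter the computation, which is what "behavioral signals only" means in practice.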

FIG. 01 · DATA FLOW
INPUT · BEHAVIORAL SIGNALS
  • typing dynamics
  • mouse motion
  • window switches
  • copilot reactions
    ↓
Qualia-1 · 17.6M params · on device
    ↓
OUTPUT · STATE VECTOR
  • focus_state
  • workload_state
  • friction_level
  • interruptibility
04 / Output

Four variables. Real time. Through a simple API.

The model emits a compact state vector for the current user. All values are computed on device and exposed to your agent as a simple read, polled or streamed.

API · GET /qualia/state · application/json
{
  "focus_state":      "deep",        // deep | moderate | fragmented
  "workload_state":   "normal",      // light | normal | intense
  "friction_level":   "low",         // low | high
  "interruptibility": "good",        // good | neutral | bad
  "confidence":       0.83,
  "timestamp":        "2026-05-05T14:22:09Z"
}
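On the agent side, the response above maps naturally onto a small typed reader. A sketch in TypeScript, assuming the field names and enums from the example; the `parseQualiaState` helper and its validation strategy are our assumptions, not a published SDK:

```typescript
// Illustrative only: narrow the untyped JSON payload into a typed state vector.

type FocusState = "deep" | "moderate" | "fragmented";
type WorkloadState = "light" | "normal" | "intense";
type FrictionLevel = "low" | "high";
type Interruptibility = "good" | "neutral" | "bad";

interface QualiaState {
  focus_state: FocusState;
  workload_state: WorkloadState;
  friction_level: FrictionLevel;
  interruptibility: Interruptibility;
  confidence: number;
  timestamp: string;
}

// Reject unknown enum values so a stale or mismatched model version fails loudly
// instead of silently steering agent policy.
function parseQualiaState(raw: unknown): QualiaState {
  const o = raw as Record<string, unknown>;
  const oneOf = <T extends string>(v: unknown, allowed: readonly T[]): T => {
    if (allowed.includes(v as T)) return v as T;
    throw new Error(`unexpected state value: ${String(v)}`);
  };
  return {
    focus_state: oneOf(o.focus_state, ["deep", "moderate", "fragmented"] as const),
    workload_state: oneOf(o.workload_state, ["light", "normal", "intense"] as const),
    friction_level: oneOf(o.friction_level, ["low", "high"] as const),
    interruptibility: oneOf(o.interruptibility, ["good", "neutral", "bad"] as const),
    confidence: typeof o.confidence === "number" ? o.confidence : 0,
    timestamp: String(o.timestamp),
  };
}
```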
05 / State Variables

Working variables, not emotional labels.

Qualia-1 doesn't predict emotions, diagnoses, or long-term wellbeing. It surfaces a small set of working variables your product team can wire directly into agent logic.

Focus · focus_state
deep — sustained, uninterrupted focus.
moderate — engaged with mild context drift.
fragmented — frequent short context switches.
Signal sources: typing run-length, switch frequency, dwell time per context.

Workload · workload_state
light — low action density, short sessions.
normal — typical working pace.
intense — high density, long sessions, many switches.
Signal sources: action rate, session length, pause distribution.

Friction · friction_level
low — actions land linearly, few backtracks.
high — undos, repeats, retries in the same UI region.
High friction is the strongest signal that the user could use help now — and may be the difference between a useful suggestion and an annoying one.

Interruptibility · interruptibility
good — proactive actions welcome.
neutral — proceed at default cadence.
bad — minimize or defer non-essential actions.
Trained on the user's history of reactions to past agent interventions: were suggestions used, dismissed instantly, or did they trigger an abrupt context switch?
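These four variables are designed to be wired straight into agent logic. One way a product team might collapse them into a single proactivity decision, like the `policy_hint` shown in the demo above — the thresholds, precedence order, and hint names here are our assumptions, not a shipped policy:

```typescript
// Illustrative only: one possible state -> policy mapping.

type PolicyHint = "PROACTIVE_OK" | "INLINE_ONLY" | "HOLD";

interface State {
  focus_state: "deep" | "moderate" | "fragmented";
  friction_level: "low" | "high";
  interruptibility: "good" | "neutral" | "bad";
}

function policyHint(s: State): PolicyHint {
  if (s.interruptibility === "bad") return "HOLD";        // defer non-essential actions
  if (s.friction_level === "high") return "PROACTIVE_OK"; // user is stuck: help now
  if (s.focus_state === "deep") return "INLINE_ONLY";     // protect deep focus
  return s.interruptibility === "good" ? "PROACTIVE_OK" : "INLINE_ONLY";
}
```

Note the precedence: a hard "do not interrupt" wins over everything, and a stuck user overrides focus protection — the one moment a proactive move is likely to be welcomed.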
06 / Results

A specialist beats a frontier generalist on its own benchmark.

We collected a held-out evaluation set built to expose where state-aware models should win: turns where the right answer depends on the user's trajectory, not on the surface text of their reply. Each multi-turn dialogue encodes a controlled latent state — "patience budget" and "aesthetic drift" — through behavioral patterns alone, never named in words. At the killer turn, the model must predict the correct user reaction (accept · modify · reject · disengage) from the trajectory itself.

Aesthetic-Drift Killer Set · n = 10 seeds × template
Predict the correct reaction at turn N.
  Qualia-1 · 17.6M params · on-device — 80%
  Gemini 3.1 Pro · zero-shot · frontier baseline — 40%
The frontier LLM misses the drift-driven signal in most cases. The user reply at the killer turn is often identical in surface form to a "high-patience accept" — only the trajectory tells you it's a disengage.
~10,000× — smaller than the frontier baseline · runs locally with no cloud round-trip
93 / 85 — accuracy on the full killer set · Qualia-1 vs. zero-shot Gemini 3.1 Pro (%)
07 / Applications

What your agent can do with Qualia-1.

A.

Time interventions to the moment.

Suppress popups and heavy dialogs during deep focus. Surface inline help only. In high-friction states, light up the bigger moves — collect context, take over the routine, ship a finished result instead of a chain of small steps.

B.

Match action weight to load.

Under light or normal load, the agent can propose stronger actions — refactors, rewrites, structural moves. Under intense load with high friction, scale back to surgical suggestions with minimal blast radius.

C.

Test policies that didn't exist before.

Run experiments on state → intervention mappings. Optimize accept rate, cut copilot disable rate, and reduce the "the agent is in my way" feedback bucket by orders of magnitude — not percentage points.
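The experiment loop above only needs two things: each intervention logged with the state snapshot it fired under, and the user's reaction to it. A minimal sketch of scoring one such experiment, with illustrative names throughout — none of this is a Qualia-1 API:

```typescript
// Illustrative only: accept rate per state -> intervention policy variant.

type Reaction = "accepted" | "dismissed" | "ignored";

interface InterventionLog {
  variant: string;     // which state -> intervention mapping was active
  focus_state: string; // state snapshot at fire time
  reaction: Reaction;
}

// Accept rate for one policy variant, optionally restricted to a focus state,
// so you can see e.g. whether a variant only wins for fragmented users.
function acceptRate(logs: InterventionLog[], variant: string, focus?: string): number {
  const rows = logs.filter(l => l.variant === variant && (!focus || l.focus_state === focus));
  if (rows.length === 0) return 0;
  return rows.filter(l => l.reaction === "accepted").length / rows.length;
}
```

Slicing by state is the point: an aggregate accept rate hides exactly the per-state differences a state-aware policy is supposed to exploit.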

08 / Privacy & Scope

By design, not by promise.

The model is built around interaction patterns, not the person. Its scope is the behavior of agents inside a working interface — not employee evaluation, not medical diagnosis, not anything that should not exist.

× Does not read the contents of code, documents, email, or chat.
× Does not transmit raw interaction logs off the device.
× Does not use camera, microphone, or biometric sensors.
× Does not predict emotions, traits, mental health, or productivity scores.
Runs entirely on device. Outputs only the four variables above.
Designed under EU AI Act requirements for workplace systems.

Make your agent see the human.

Closed beta is open to a limited number of teams shipping AI copilots and agents. We work directly with each pilot on integration, evaluation, and policy design.

Request beta access