Synstate Labs is an independent research and engineering lab focused on device-level modeling of cognitive state for AI systems.
Our core technology, the Cognitive State Engine, runs on personal devices and turns continuous interaction signals into a live model of user state that applications and AI agents can use to adapt their behavior.
A privacy-first, on-device engine that fuses interaction signals into a compact set of state variables, enabling adaptive timing, pacing, and interaction intensity across applications and AI agents.
Typing, mouse, gaze, context switching, micro-pauses — fused into a unified state model.
Focus, workload, fragmentation, fatigue trend — updated continuously in real time.
Raw content stays local. State extraction happens entirely on the user's device.
A compact API for apps and AI agents to adapt timing, pacing, and interaction intensity.
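Synstate has not published this API surface itself; the following is a minimal sketch of what such a compact interface could look like. All type names, method names, and thresholds are invented for illustration.

```ts
// Hypothetical sketch only: names and thresholds are assumptions,
// not Synstate's published API.

// The four state variables from the list above: focus, workload, and
// fragmentation normalized to [0, 1]; fatigueTrend as a signed slope
// (positive = fatigue building up).
interface UserState {
  focus: number;
  workload: number;
  fragmentation: number;
  fatigueTrend: number;
}

// Minimal consumer surface: apps subscribe, the on-device engine pushes
// updates; raw input events never cross this boundary.
interface StateEngine {
  subscribe(listener: (state: UserState) => void): () => void; // returns an unsubscribe function
}

// Example adaptation policy: decide how intrusive an app should be right now.
function interactionIntensity(s: UserState): "quiet" | "normal" | "assertive" {
  if (s.focus > 0.8 && s.fragmentation < 0.3) return "quiet"; // deep focus: hold notifications
  if (s.fatigueTrend > 0.5) return "quiet";                   // fatigue climbing: back off
  if (s.workload > 0.7) return "normal";                      // busy but reachable: batch, don't pile on
  return "assertive";
}

// Wiring the policy to the engine (the engine instance would come from the host runtime).
function attach(engine: StateEngine): () => void {
  return engine.subscribe((state) => {
    console.log("interaction intensity:", interactionIntensity(state));
  });
}
```

Keeping the surface to a handful of normalized variables plus a subscription is one reading of "compact": applications get enough signal to adapt pacing without ever seeing raw input.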
System-level live demo of real-time state modeling: from interaction signals to live state variables.
Application-level demo to test state-aware interaction patterns such as pacing, simplification, and break timing.
See how Synstate's Cognitive State Engine tracks user state in real time, from interaction signals to live state variables, in the Synstate Live Console and the Focus Playground.
We offer a guided live demo session where we walk through the Synstate Live Console and the Focus Playground, discuss your use cases, and answer technical questions about integration and roadmap.
Synstate doesn't create understanding. It reveals what was already there in the signal.
Current AI sees each interaction as a fresh start. Synstate tracks patterns over days — detecting the slow drift from baseline to burnout before it becomes a crisis.
Normal workload, regular breaks, stable patterns
Context switching rises, micro-pause frequency drops
3 consecutive days of irregular patterns detected
Synstate alert → manager notified → schedule review
Adjusted schedule, resilience stabilizing
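The trigger in this timeline, an alert after three consecutive irregular days, reduces to a simple rule. The sketch below assumes per-day summaries measured relative to a personal baseline; the specific features and thresholds are illustrative assumptions, not Synstate's actual criteria.

```ts
// Sketch of the consecutive-day alert rule from the timeline above.
// What counts as an "irregular" day is an assumption made here for
// illustration; real criteria would come from the deployment.

interface DailySummary {
  date: string;              // e.g. "2025-03-14"
  contextSwitchRate: number; // context switches per hour, relative to personal baseline (1.0 = baseline)
  microPauseRate: number;    // micro-pauses per hour, relative to personal baseline
}

// A day is flagged when switching rises well above baseline while micro-pauses drop.
function isIrregular(day: DailySummary): boolean {
  return day.contextSwitchRate > 1.5 && day.microPauseRate < 0.6;
}

// Alert once N consecutive irregular days are observed (N = 3 in the timeline).
function shouldAlert(history: DailySummary[], n = 3): boolean {
  if (history.length < n) return false;
  return history.slice(-n).every(isIrregular);
}
```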
Modeling user state from everyday human–computer interaction: keyboard and mouse dynamics, gaze and micro-pauses, context switching and session structure.
Inferring attention, fragmentation and workload over time, rather than momentary "focus scores", with an emphasis on stability and interpretability.
Designing APIs and protocols that let agents and applications adapt the timing, pacing, and intensity of interaction without creating new failure modes (one possible shape of such a protocol is sketched after this list).
Methods for extracting useful state while keeping raw content local, minimizing identifiability and misuse risks.
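On the API-and-protocol direction above, the sketch below shows one possible shape for attention negotiation between agents and an on-device mediator. The request format, the mediation policy, and all names are assumptions for illustration, not a specification.

```ts
// Illustrative attention-negotiation protocol; every name and the policy
// itself are assumptions, not Synstate's design.

type Urgency = "background" | "soon" | "now";

interface AttentionRequest {
  agentId: string;
  urgency: Urgency;
  estimatedSeconds: number; // how long the interruption would take
}

type Decision =
  | { kind: "grant" }
  | { kind: "defer"; retryAfterMs: number }; // agent must back off, not retry-loop

// One possible mediation policy: only "now" requests break deep focus;
// everything else waits until focus drops or a natural break occurs.
function mediate(req: AttentionRequest, focus: number): Decision {
  if (req.urgency === "now") return { kind: "grant" };
  if (focus > 0.75) return { kind: "defer", retryAfterMs: 5 * 60_000 };
  if (req.urgency === "soon") return { kind: "grant" };
  return { kind: "defer", retryAfterMs: 15 * 60_000 };
}
```

Returning an explicit retry delay instead of a bare rejection is one way to avoid a new failure mode: deferred agents immediately retrying and amplifying the very interruption pressure the mediator is meant to reduce.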
Cognitive State Engine prototype — on-device engine that fuses typing, mouse, gaze, and context signals into a small set of state variables (focus, workload, fragmentation, fatigue trend); a toy version of this fusion step follows this list.
Live console and playground demos — internal tools to visualize state in real time and test how applications can adapt navigation, pacing and breaks.
Early design-partner studies — qualitative and quantitative work with early partners to evaluate behavioral change and perceived overload reduction in state-aware systems.
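As a toy illustration of the fusion step in the prototype above: the features, weights, and smoothing constant below are invented stand-ins and say nothing about the actual Cognitive State Engine.

```ts
// Toy sketch of fusing windowed interaction features into one smoothed
// state variable; all numbers here are illustrative assumptions.

interface WindowFeatures {
  keystrokeVariability: number; // inter-key interval variance, normalized to [0, 1]
  appSwitches: number;          // context switches in the window, normalized to [0, 1]
  microPauseRate: number;       // short idle bursts in the window, normalized to [0, 1]
}

// Exponentially weighted moving average keeps the estimate stable against
// single noisy windows (alpha controls how fast new evidence wins).
function ewma(prev: number, next: number, alpha = 0.2): number {
  return (1 - alpha) * prev + alpha * next;
}

// One window's raw "fragmentation" evidence: a weighted blend of features,
// clamped to [0, 1].
function fragmentationEvidence(f: WindowFeatures): number {
  const raw = 0.5 * f.appSwitches + 0.3 * f.keystrokeVariability + 0.2 * (1 - f.microPauseRate);
  return Math.min(1, Math.max(0, raw));
}

// Per-window update: fold new evidence into the smoothed state variable.
function updateFragmentation(prev: number, f: WindowFeatures): number {
  return ewma(prev, fragmentationEvidence(f));
}
```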
We are currently exploring questions such as:
How stable and transferable are state representations across tasks and devices?
How should agents negotiate with each other when they all want the user's attention?
What is the minimal state interface that is still useful for real applications?
How do we measure long-term impact on cognitive overload and burnout risk, not just short-term engagement?
We collaborate with research groups and product teams interested in human-state modeling for AI systems.
Get in touch about research collaboration