Optics that disappear into the wearer.
The screen is a forty-year-old idea. We are building the optics, the on-device intelligence, and the operating layer for what replaces it.
We are willing to put a number on it. The medium of computing is changing for the first time since 1984 — and the companies that try to keep building around the rectangle are quietly running out of runway.
Dominant form factors have turned over roughly every quarter-century: the desktop took over in 1984, the smartphone in 2007. The same curve puts the smartphone's successor around 2032. That successor will not be a rectangle. It cannot be.
An agent that reasons about your environment, your gaze, and the surface you are touching cannot be confined to a tab. The output medium has to expand to match the model.
Waveguide tolerances, MicroLED yields, and freeform optical tooling are no longer the bottleneck. We can specify them — and the supply chain ships them.
The unit of software is no longer the application. It is the prompt, the agent, the gesture, the gaze. A new system layer is required to host the new grammar.
Optics, intelligence, and system. We refuse to specialize. Spatial computing is a hardware-software whole — the hand-offs between the three layers are where every previous attempt has failed.
Waveguide architectures targeting a 30° field of view, sub-8 g per eye, and 50% optical efficiency. Designed for daylight, not the lab bench. We don't ship a heads-up display; we ship a window.
Multimodal models running locally, with sub-50 ms response and an 8-billion-parameter ceiling. The agent perceives what the wearer perceives, without a round-trip to a server and without surrendering the wearer's privacy.
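As a sketch of what those two ceilings mean in practice, in the Rust the runtime is written in: everything below is an illustrative assumption, not our shipping code. A hypothetical `LocalModel` is refused if it exceeds the parameter ceiling, and an answer that misses the 50 ms budget is dropped rather than shown stale.

```rust
use std::time::{Duration, Instant};

/// Hard ceilings from the spec: local inference only, sub-50 ms
/// response, at most 8 billion parameters.
const LATENCY_BUDGET: Duration = Duration::from_millis(50);
const PARAM_CEILING: u64 = 8_000_000_000;

/// Hypothetical stand-in for the on-device multimodal model.
struct LocalModel {
    param_count: u64,
}

impl LocalModel {
    /// Placeholder inference. A real runtime would fuse camera, gaze,
    /// and audio features here; by design it never touches a network.
    fn infer(&self, _frame: &[u8]) -> String {
        "annotation".to_string()
    }
}

/// Returns an overlay string only if the model fits the parameter
/// ceiling and the answer lands inside the latency budget.
fn respond(model: &LocalModel, frame: &[u8]) -> Option<String> {
    if model.param_count > PARAM_CEILING {
        return None; // refuse to load anything past the ceiling
    }
    let start = Instant::now();
    let out = model.infer(frame);
    // A late answer is a wrong answer on a moving head: drop it.
    (start.elapsed() <= LATENCY_BUDGET).then_some(out)
}

fn main() {
    let model = LocalModel { param_count: 7_800_000_000 };
    let frame = vec![0u8; 640 * 480]; // one mock camera frame
    match respond(&model, &frame) {
        Some(text) => println!("overlay: {text}"),
        None => println!("skipped: over budget"),
    }
}
```

Dropping late answers instead of queueing them is the design choice the latency bound forces: on a moving head, a stale annotation is a mislabeled world.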
Spatial primitives instead of windows. Intent instead of apps. 60 Hz pose tracking. An open developer SDK. The operating system for when the screen disappears and the world becomes the canvas.
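To make "primitives instead of windows, intent instead of apps" concrete, here is a minimal Rust sketch of the shapes such an SDK could expose. Every name in it (`Pose`, `Primitive`, `Intent`, `resolve`) is a hypothetical stand-in rather than the actual SDK surface; only the 60 Hz figure comes from the spec above.

```rust
/// One pose sample every ~16.6 ms: the 60 Hz tracking loop.
const POSE_INTERVAL_US: u64 = 1_000_000 / 60;

/// A head pose: position in meters plus an orientation quaternion.
#[derive(Clone, Copy, Debug)]
struct Pose {
    position: [f32; 3],
    orientation: [f32; 4],
}

/// Spatial primitives replace windows: content anchored in the world,
/// not laid out in a rectangle.
#[derive(Debug)]
enum Primitive {
    Label { anchor: [f32; 3], text: String },
    Path { waypoints: Vec<[f32; 3]> },
    Highlight { anchor: [f32; 3], radius_m: f32 },
}

/// Intents replace apps: the system routes a goal, not a binary.
enum Intent {
    Navigate { destination: String },
    Identify { gaze_target: [f32; 3] },
}

/// Hypothetical resolver: turns an intent into primitives placed
/// relative to the wearer's current pose.
fn resolve(intent: Intent, pose: Pose) -> Vec<Primitive> {
    match intent {
        Intent::Navigate { destination } => vec![
            Primitive::Path { waypoints: vec![pose.position, [0.0, 0.0, 5.0]] },
            Primitive::Label { anchor: [0.0, 0.0, 5.0], text: destination },
        ],
        Intent::Identify { gaze_target } => vec![
            Primitive::Highlight { anchor: gaze_target, radius_m: 0.2 },
        ],
    }
}

fn main() {
    let pose = Pose { position: [0.0, 1.6, 0.0], orientation: [0.0, 0.0, 0.0, 1.0] };
    let scene = resolve(Intent::Navigate { destination: "exit".into() }, pose);
    println!("pose {pose:?}, one sample per {POSE_INTERVAL_US} µs");
    println!("scene: {scene:?}");
}
```

The point of the shape: a caller states a goal and a pose, and receives world-anchored content back. No window ever appears.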
The trade-off no one talks about: field of view, weight, and optical efficiency. Every AR product so far has optimized two of these axes and abandoned the third. We are designing for all three simultaneously, knowing the constraint is brutal.
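Stated as a constraint check rather than a slogan, with the three axes taken from the optics targets above. The numbers for the "two-of-three" design are illustrative only, not measurements of any shipping product, and `DesignPoint` is a hypothetical type.

```rust
/// The three axes, with the targets from the optics spec above.
struct DesignPoint {
    fov_deg: f32,    // field of view: target >= 30°
    weight_g: f32,   // per-eye mass: target under 8 g
    efficiency: f32, // optical efficiency: target >= 0.50
}

impl DesignPoint {
    /// Which of the three targets this design actually meets.
    fn meets(&self) -> [bool; 3] {
        [
            self.fov_deg >= 30.0,
            self.weight_g < 8.0,
            self.efficiency >= 0.50,
        ]
    }
}

fn main() {
    // Illustrative numbers only: a wide, bright, heavy design of the
    // usual two-out-of-three kind, versus the three-axis target.
    let designs = [
        ("two-of-three", DesignPoint { fov_deg: 52.0, weight_g: 16.0, efficiency: 0.60 }),
        ("all-three",    DesignPoint { fov_deg: 30.0, weight_g: 7.5,  efficiency: 0.50 }),
    ];
    for (name, d) in designs {
        let [fov, weight, eff] = d.meets();
        println!("{name}: fov={fov}, weight={weight}, efficiency={eff}");
    }
}
```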
Information that arrives when you need it. Tools that materialize at the point of attention. A world annotated with intelligence — quietly, without taking it over. The wearer stays the protagonist; the device becomes the camera operator.
Concept rendering. Not a product. The vector of where we are going.
Spatial computing is a decade-long endeavour. We are honest about where we are. We refuse to overclaim — and we refuse to be quiet about what we will ship.
R&D on freeform optical engines targeting 30° FOV at sub-8 g per eye. On-device multimodal runtime in Rust. First foundational patents filed. Four research hires.
Wearable optical engine v0. Internal pilot with the founding team. Closed alpha with three industry partners under NDA. First public technical paper.
Field deployment across three verticals: procedural medicine, field engineering, warehouse routing. Co-development with anchor partners. Spatial SDK preview.
Consumer-grade optical engine. Public developer platform with the full spatial primitive SDK. Open ecosystem with first-party reference applications.
Lumira is a spatial computing company founded in 2026, headquartered in Fuqing, Fujian, China. We are early. We are deliberate. We refuse to ship vapor — and we refuse to shrink the ambition to fit a shorter timeline.
Mission
“To translate light into intelligence — and intelligence into a new way of seeing.”
We work with optical engineers, AI researchers, hardware partners, and forward-leaning enterprises. If you are building at the edge of vision, intelligence, or interface — write to us.
Direct line
founders@lumirahq.com
We respond within 48 hours.