Lumira — Spatial Intelligence

Seeing tomorrow before it arrives.

The screen is a forty-year-old idea. We are building the optics, the on-device intelligence, and the operating layer for what replaces it.

v 0.1 · R&D
01 / Timing

The screen has eight years left.

We are willing to put a number on it. The medium of computing is changing for the first time since 1984 — and the companies that try to keep building around the rectangle are quietly running out of runway.

Fig · The forty-year cycle of computing media (1950–2040). ≈40-year cycles; the "today" marker sits at 2026, eight years before the projected handoff.
  • Mainframe · Centralized · 1950–1985
  • Personal Computer · Desk-bound · 1980–2015
  • Smartphone · Pocketed · 2007–2032
  • Spatial Computing · Embodied · 2026–

Each dominant medium has lasted roughly four decades. The smartphone era, dated from 2007, follows the same curve — its successor takes over around 2032. That successor will not be a rectangle. It cannot be.

  • 01

    AI's interface cannot live inside a window.

    An agent that reasons about your environment, your gaze, and the surface you are touching cannot be confined to a tab. The output medium has to expand to match the model.

  • 02

    Optics, finally, is consumer-grade.

    Waveguide tolerances, MicroLED yields, and freeform optical tooling are no longer the bottleneck. We can specify them — and the supply chain ships them.

  • 03

    Apps dissolve once intent is addressable.

    The unit of software is no longer the application. It is the prompt, the agent, the gesture, the gaze. A new system layer is required to host the new grammar.

02 / Directions

Three layers. Shipped together, or not at all.

Optics, intelligence, and system. We refuse to specialize. Spatial computing is a hardware-software whole — the hand-offs between the three layers are where every previous attempt has failed.

01 / Optics

Optics that disappear into the wearer.

Waveguide architectures targeting a 30° field of view, under 8 grams per eye, and 50% optical efficiency. Designed for daylight, not the lab bench. We don't ship a heads-up display — we ship a window.

30° FOV · <8 g / eye · 50% efficiency
02 / Intelligence

Inference that lives on the device.

Multimodal models running locally with sub-50 ms response and an 8-billion-parameter ceiling. The agent perceives what the wearer perceives — without a round-trip to a server, without surrendering the wearer's privacy.

<50 ms latency · 8B params · On-device
03 / System

An OS where the interface is the environment.

Spatial primitives instead of windows. Intent instead of apps. 60Hz pose tracking. An open developer SDK. The operating system for when the screen disappears — and the world becomes the canvas.

60 Hz tracking · Spatial SDK · Open API
Design space

The trade-off no one talks about: every AR product so far has optimized two axes and abandoned the third. We are designing for all three simultaneously — knowing the constraint is brutal.

03 / Concept

Imagine the next decade of human vision.

Information that arrives when you need it. Tools that materialize at the point of attention. A world annotated with intelligence — quietly, without taking it over. The wearer stays the protagonist; the device becomes the camera operator.

Concept rendering. Not a product. The vector of where we are going.

Engineering reference · optical module exploded view
  • Waveguide · freeform substrate
  • MicroLED projector · 0.13 cc
  • On-device NPU · 12 TOPS
  • Eye-tracking IR · stereo
04 / Roadmap

A patient trajectory.

Spatial computing is a decade-long endeavour. We are honest about where we are. We refuse to overclaim — and we refuse to be quiet about what we will ship.

Fig · Overlapping phases (2026–2029+). Each phase begins once the previous phase is ≥ 60% complete.
  • 01 / 2026 — H2

    Research

    R&D on freeform optical engines targeting 30° FOV at sub-8 g per eye. On-device multimodal runtime in Rust. First foundational patents filed. Four research hires.

  • 02 / 2027 — H1

    Prototype

    Wearable optical engine v0. Internal pilot with the founding team. Closed alpha with three industry partners under NDA. First public technical paper.

  • 03 / 2027 H2 — 2028 H1

    Partner Pilot

    Field deployment across three verticals: procedural medicine, field engineering, warehouse routing. Co-development with anchor partners. Spatial SDK preview.

  • 04 / 2028 — onward

    General Availability

    Consumer-grade optical engine. Public developer platform with the full spatial primitive SDK. Open ecosystem with first-party reference applications.

05 / About

Quiet ambition, long horizon.

Lumira is a spatial computing company founded in 2026, headquartered in Fuqing, Fujian, China. We are early. We are deliberate. We refuse to ship vapor — and we refuse to shrink the ambition to fit a shorter timeline.

Mission

To translate light into intelligence — and intelligence into a new way of seeing.

Company facts (public disclosure)
Legal name · Fujian Guangzhi Future Technology Co., Ltd.
Founded · May 12, 2026
Registered capital · ¥10,000,000
Domain · Photonics × AI × Spatial OS
Stage · Research & Development
Registration no. · 91350181MAKC6QYM4U
Headquarters · Fuqing, Fujian, China
06 / Contact

Tell us what you are building.

We work with optical engineers, AI researchers, hardware partners, and forward-leaning enterprises. If you are building at the edge of vision, intelligence, or interface — write to us.

We respond within 48 hours.