Olix will ship its first photonic computing product in 2027, the startup confirmed, positioning itself in a global race for specialized AI inference hardware.[1]
Photonic computing uses light instead of electricity to process data, promising lower power consumption and higher speeds. Companies worldwide are developing these alternatives to traditional silicon-based AI accelerators as demand for efficient inference chips grows.
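The core workload these chips target is the matrix-vector multiplication at the heart of neural-network inference; a common photonic design factors a weight matrix into two interferometer meshes and a row of optical attenuators. The sketch below is purely illustrative of that decomposition, not any vendor's implementation, and all values in it are made up.

```python
import numpy as np

# Illustrative only: photonic accelerators commonly realize a weight matrix W
# optically by factoring it as W = U @ diag(S) @ Vh (a singular value
# decomposition), where the unitaries U and Vh map onto interferometer meshes
# and the diagonal S onto optical attenuators/amplifiers.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))       # trained layer weights
U, S, Vh = np.linalg.svd(W)           # W = U @ diag(S) @ Vh

x = rng.standard_normal(4)            # input activations, encoded onto light
y = U @ (S * (Vh @ x))                # light traverses mesh Vh, modulators S, mesh U

assert np.allclose(y, W @ x)          # matches the electronic matrix multiply
```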
The AI chip landscape is fragmenting internationally. While hyperscalers in the US and China pursue general-purpose training processors, inference deployment calls for architectures optimized for cost and power efficiency. Specialized accelerators such as Amazon's Trainium chips and Groq's Language Processing Units target these inference tasks, running trained AI models rather than training them.
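To make the training/inference split concrete, here is a minimal, hypothetical sketch, not tied to any of the chips above: training repeatedly updates a model's weights, while inference only applies the frozen weights to new inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((2, 3)) * 0.1   # model weights
x = rng.standard_normal(3)              # one training example
y_true = np.array([1.0, 0.0])           # its target output

# Training (done once, over many such steps): a forward pass plus a
# gradient-descent update of the weights. This is the workload training chips serve.
y_pred = W @ x
W -= 0.01 * np.outer(y_pred - y_true, x)   # gradient of 0.5*||Wx - y||^2 w.r.t. W

# Inference (repeated for every user query after deployment): a forward pass
# only, with weights frozen. This is the workload inference accelerators target.
y_serving = W @ x
```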
Advanced packaging has become a critical global bottleneck. High Bandwidth Memory (HBM) integration and chiplet architectures require sophisticated manufacturing capabilities concentrated in a handful of facilities. Micron and other international memory manufacturers are expanding HBM production to meet surging demand.
Nvidia projects $1 trillion in chip sales through 2027, reflecting a massive worldwide infrastructure buildout. But market dynamics are shifting: a model is trained once, while inference runs millions or billions of times across global data centers and edge devices.
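A back-of-envelope comparison shows why that asymmetry matters. The figures below are assumptions chosen for illustration, not numbers from the article or any vendor.

```python
# All figures here are hypothetical assumptions for illustration only.
training_cost = 50_000_000        # one-time training run, USD (assumed)
cost_per_query = 0.002            # per-inference serving cost, USD (assumed)
queries_per_day = 100_000_000     # global query volume (assumed)

annual_inference = cost_per_query * queries_per_day * 365
print(f"Training (one-time): ${training_cost:,.0f}")
print(f"Inference (1 year):  ${annual_inference:,.0f}")
# Under these assumptions, a single year of inference (~$73M) already exceeds
# the one-time training cost, and the gap widens as query volume grows; hence
# the push toward chips optimized for inference cost and power.
```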
POET Technologies and other photonic computing firms across North America and Asia are pursuing similar approaches. The technology faces engineering challenges in integrating optical components with electronic systems and in manufacturing at commercial scale. Most photonic chips remain in development or limited production.
Olix's 2027 target suggests the company expects to complete product development and secure manufacturing partnerships within two years. Shipping production photonic chips typically requires partnering with semiconductor foundries that can fabricate both optical and electronic components, a specialized capability concentrated in select facilities worldwide.
The bet on specialized AI accelerators assumes workload-specific chips will outperform general-purpose processors for inference as AI deployment scales globally.
Sources:
[1] Crunchbase News, February 1, 2026