
In an article published by IEEE Spectrum, the team at Imec reports achieving over 400 gigabits per second (Gb/s) per lane on a 300 mm silicon photonics wafer platform. Until recently, industry-standard optical links in data centers ran at around 100 Gb/s per lane and were trending toward 200 Gb/s. But the explosion of AI-driven compute clusters and the resulting data-traffic demand have pushed optical interconnects into a new regime, where 400 Gb/s per lane is now the target.
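To put those per-lane generations in context, a rough sketch of the arithmetic: aggregate module bandwidth is simply the per-lane rate times the lane count. The 8-lane configuration below is an assumption for illustration, not a figure from the article.

```python
# Illustrative arithmetic (the 8-lane module is an assumed example,
# not a configuration stated in the article):
# aggregate bandwidth = per-lane rate x number of lanes.
def aggregate_tbps(lane_gbps: float, lanes: int) -> float:
    """Return aggregate bandwidth in Tb/s for a multi-lane optical module."""
    return lane_gbps * lanes / 1000

# An 8-lane module at each generation of per-lane rate:
for lane_rate in (100, 200, 400):
    print(f"{lane_rate} Gb/s x 8 lanes = {aggregate_tbps(lane_rate, 8):.1f} Tb/s")
# 100 Gb/s x 8 lanes = 0.8 Tb/s
# 200 Gb/s x 8 lanes = 1.6 Tb/s
# 400 Gb/s x 8 lanes = 3.2 Tb/s
```

This is why the jump from 100 to 400 Gb/s per lane matters at the system level: the same lane count quadruples the bandwidth of a transceiver module.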
A key technical milestone: Imec’s silicon-based electro-absorption modulator (EAM) reached 448 Gb/s per lane, which the article describes as a first in silicon photonics. This is significant because skeptics had argued that silicon photonics had hit a ceiling in speed and energy efficiency for ultra-high-bandwidth interconnects, and that alternative platforms such as indium phosphide (InP), thin-film lithium niobate, or barium titanate would be necessary. Imec argues, however, that silicon still has headroom even for the most demanding links.
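The article does not state the modulation format behind the 448 Gb/s figure, so the following is a hedged sketch: if PAM4 (2 bits per symbol, the common format in high-speed datacom) is assumed, the relationship between bit rate and symbol rate works out as follows.

```python
import math

# Hedged sketch: bit rate vs symbol rate for multi-level signaling.
# PAM-N carries log2(N) bits per symbol; treating Imec's 448 Gb/s
# result as PAM4 is an assumption here, not stated in the article.
def symbol_rate_gbd(bit_rate_gbps: float, levels: int) -> float:
    """Symbol rate in GBd for a given bit rate and PAM level count."""
    return bit_rate_gbps / math.log2(levels)

print(symbol_rate_gbd(448, 4))  # PAM4: 224.0 GBd
print(symbol_rate_gbd(448, 2))  # NRZ would need 448.0 GBd
```

The design trade-off this arithmetic captures: multi-level formats halve the required analog bandwidth of the modulator relative to NRZ, at the cost of tighter signal-to-noise requirements.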
Beyond raw data rate, the key advantages lie in manufacturability and cost: a 300 mm silicon wafer platform leverages the existing semiconductor supply chain. Imec also emphasizes that the achievement aligns with short-reach, scale-up optical interconnects for AI clusters, where low latency, high bandwidth, and energy efficiency are critical. Challenges remain, including laser integration, packaging losses, thermal management, and system-level implementation, but the demonstrated milestone sends a strong signal that silicon photonics is evolving from niche to mainstream for next-generation data-center interconnects.
For engineers working in high-performance computing, data-center design, or optical hardware, this means a possible shift: the next performance frontier isn’t just electronics or packaging but the optical link itself, and silicon photonics is entering that frontier with credible momentum.