
“Double Duty” for FPGAs

Sep 17, 2025

Cornell’s chip-architecture tweak cuts space and power for AI workloads.

Cornell researchers have developed a new Field-Programmable Gate Array (FPGA) architecture that significantly reduces the energy cost of AI computations by making fuller use of components that already exist on the chip. As AI models grow more powerful, their energy demands, especially in data centers, are rising, driving the search for more efficient hardware, the Cornell Chronicle reports.

The team focused on FPGAs: reprogrammable chips used in cloud infrastructure, telecommunications, and, increasingly, AI. A standard FPGA logic block contains lookup tables (LUTs), which perform logic operations, and adder chains, which handle the additions at the heart of neural-network arithmetic. Normally, the adder chains can be reached only indirectly, through the LUTs. This architecture works, but it wastes potential when arithmetic operations dominate.
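To make the conventional arrangement concrete, here is a minimal behavioral sketch (not Cornell's code, and function names are illustrative): a k-input LUT is just a truth table indexed by its input bits, and in a classic logic block one stage of the adder chain only sees an operand that has already passed through the LUT.

```python
# Behavioral sketch of a conventional FPGA logic block (illustrative names).

def lut(truth_table, inputs):
    """A k-input LUT: the input bits form an index into a 2^k-entry table."""
    index = 0
    for bit in inputs:
        index = (index << 1) | bit
    return truth_table[index]

def classic_block(truth_table, inputs, carry_in):
    """Conventional block: the adder stage is fed only through the LUT."""
    operand = lut(truth_table, inputs)       # logic result feeds the adder
    total = operand + carry_in               # one stage of the adder chain
    return total & 1, total >> 1             # (sum bit, carry out)

# A 2-input XOR truth table: inputs 00, 01, 10, 11 -> 0, 1, 1, 0
xor_table = [0, 1, 1, 0]
sum_bit, carry_out = classic_block(xor_table, [1, 0], 0)
```

The point of the sketch is the dependency: the addition cannot start until the LUT has produced its output, so the LUT is occupied even when the work is pure arithmetic.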

To address this, the researchers designed what they call “Double Duty,” a change that lets the LUTs and adder chains in the same logic block work independently and in parallel. Both the logic and arithmetic units can then be used at once, instead of the arithmetic having to wait on, or route through, the LUTs. For “unrolled” neural network circuits, i.e., those mapped directly onto FPGA logic for speed, this design makes a big difference.
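A sketch of the parallel arrangement, as I read the article's description (again, names and interfaces are illustrative, not Cornell's design): the same block now returns a logic result and an addition result computed independently, with the adder fed directly rather than through the LUT.

```python
# Behavioral sketch of a "Double Duty"-style logic block (illustrative).

def lut(truth_table, inputs):
    """A k-input LUT: the input bits index a 2^k-entry truth table."""
    index = 0
    for bit in inputs:
        index = (index << 1) | bit
    return truth_table[index]

def double_duty_block(truth_table, lut_inputs, a, b, carry_in):
    """One block, two independent jobs in parallel:
    a logic operation (LUT) and one bit of an addition (adder chain)."""
    logic_out = lut(truth_table, lut_inputs)  # independent logic result
    total = a + b + carry_in                  # adder fed directly, not via LUT
    return logic_out, total & 1, total >> 1   # (logic bit, sum bit, carry out)

and_table = [0, 0, 0, 1]  # 2-input AND
logic, s, c = double_duty_block(and_table, [1, 1], 1, 1, 0)
# logic = 1 (AND of 1,1), sum bit = 0, carry out = 1 -- one block, one pass
```

Because the adder no longer serializes behind the LUT, a circuit that needs both operations can pack them into fewer blocks, which is where the reported area savings come from.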

In tests, Double Duty reduced the chip area required for certain AI tasks by more than 20% and improved overall performance on a suite of benchmark circuits by nearly 10%. That means fewer chips, or less hardware, can achieve similar results, which cuts energy use.

This isn’t only useful for AI. Because many applications (chip verification, wireless communications, etc.) rely heavily on arithmetic, the same architectural idea can improve efficiency across those domains. The project was a multi-institution collaboration, involving Cornell Tech, Cornell Engineering, and industry partners.