
For the past three years, artificial intelligence has been synonymous with language. Large language models (LLMs) dazzled the public with their fluency, summarizing documents, writing code, and conversing convincingly. But as impressive as these systems are, engineers have quietly reached a different conclusion: language alone does not build bridges, design factories, land spacecraft, or keep robots from colliding with humans.
At CES 2026, two CEOs from very different corners of the technology industry delivered a remarkably aligned message. Roland Busch of Siemens and Lisa Su of AMD both argued that the most consequential form of artificial intelligence for engineers will not live in text prompts or chat windows. It will live in physics-aware, spatially grounded, real-time systems—what both leaders explicitly described as Physical AI.
Physical AI is not a marketing slogan. It is an architectural shift. It is AI that understands mass, inertia, torque, pressure, friction, collision, heat, and flow. It is AI that reasons in three dimensions, operates under deterministic constraints, and must act correctly the first time, because in the physical world there is no “retry” button.
And at CES 2026, the case for Physical AI moved decisively from theory to practice.
Beyond Words: Why Engineers Need More Than Language Models
Lisa Su addressed the limitations of language-centric AI head-on in her CES keynote. While acknowledging the explosion of generative AI, she emphasized that human intelligence is not built solely on language.
“There’s a lot more than just language intelligence,” Su said. “Even for us humans, there’s more than passively looking at life and the world. We are incredibly spatially intelligent animals…connecting perception to actions.”
That distinction matters deeply to engineers. A large language model can describe how a robot should walk. It cannot, on its own, balance that robot, account for the center of gravity, or adjust torque in real time when the robot slips. Physical AI must integrate perception, decision-making, and actuation—continuously and deterministically.
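To make that distinction concrete, here is a minimal sketch of what such a closed loop looks like in code: sense, decide, and actuate on a fixed time budget, where a missed deadline is a fault rather than a slow answer. The sensor, joint, and controller interfaces below are hypothetical placeholders for illustration, not any vendor’s API, and a real system would run under a real-time operating system rather than plain Python.

```python
import time

CONTROL_PERIOD_S = 0.002  # 500 Hz loop; real-time systems fix this budget up front


def control_loop(imu, joints, balance_controller):
    """Illustrative perception-decision-actuation loop for a legged robot.

    `imu`, `joints`, and `balance_controller` are hypothetical interfaces
    standing in for real sensing, actuation, and control components.
    """
    while True:
        t_start = time.monotonic()

        # Perception: read body orientation and joint states from sensors.
        orientation = imu.read_orientation()
        joint_states = joints.read_positions_and_velocities()

        # Decision: compute torque commands that keep the robot balanced.
        torques = balance_controller.compute_torques(orientation, joint_states)

        # Actuation: apply the commands before the deadline expires.
        joints.apply_torques(torques)

        # Enforce the fixed period; an overrun is treated as a fault.
        elapsed = time.monotonic() - t_start
        if elapsed > CONTROL_PERIOD_S:
            raise RuntimeError(f"Missed control deadline by {elapsed - CONTROL_PERIOD_S:.6f} s")
        time.sleep(CONTROL_PERIOD_S - elapsed)
```

The point of the sketch is the deadline, not the controller: a language model can take as long as it needs to answer, but this loop cannot.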
Su framed Physical AI as the convergence of spatial intelligence, physics-aware reasoning, and real-time compute, enabling machines to perceive and act in the world rather than merely describe it.
“It’s the ability not only to perceive, but create 3D or even 4D worlds [the 4th dimension is time], reason about objects and people and imagine entirely new environments that still obey the laws of physics.”
For engineers, that is the difference between AI as a glorified, sometimes unreliable search tool and AI as an engineering system.
Siemens’ View: AI Only Matters When It Hits the Real World
Roland Busch articulated Siemens’ position in a pre-CES press briefing, declaring that the company sees little engineering value in AI that does not connect directly to physical systems.
“We share the same idea that the real impact from AI comes when it hits the real world,” Busch said. “This is where the trillion-dollar market is.”
For Siemens, Physical AI is inseparable from digital twins—high-fidelity, physics-based virtual representations of products, factories, infrastructure, and even entire energy systems. These twins are not static models. They are continuously synchronized with real-world sensor data, forming a closed feedback loop between simulation and reality.
Busch described how Siemens is rewriting its simulation and EDA software stacks to run on accelerated compute, enabling orders-of-magnitude faster validation of physical designs.
“If you simulate a crash, it can take half a day,” he said. “But now we bring these algorithms from CPUs to GPUs, which allows you to simulate much more in a very short time.”
The implication is profound: Physical AI does not just analyze designs faster—it enables AI-native design, where the system actively proposes new physical configurations that engineers may never have considered.
The Factory as an AI System
Perhaps the clearest expression of Siemens’ Physical AI strategy is its vision for autonomous manufacturing.
Busch described what Siemens calls the “first AI-driven, autonomous manufacturing site”—a facility where AI systems do not merely monitor operations but actively adjust parameters, optimize yield, and prevent defects before they occur.
“It’s a manufacturing site that not only analyzes what’s going wrong, but really acts on your behalf,” Busch explained.
This requires far more than a language model. The AI must ingest time-series sensor data, visual inspection feeds, environmental conditions, and control signals—then make deterministic decisions that affect physical machinery.
Crucially, Siemens is training these systems inside photo-realistic digital twins, using physics-accurate simulation environments. As Busch noted, training robots or automation systems in a visually and physically accurate digital world dramatically improves their performance in the real world.
“If you train a model in the digital world, it’s much better if it’s photorealistic,” he said. “You can train hundreds and thousands of cases over a weekend.”
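As a toy illustration of that idea (not Siemens’ actual pipeline), the sketch below runs many randomized scenarios through a simulated plant and updates a control policy on each outcome; the `policy` and `simulator` objects are hypothetical stand-ins for a learned controller and a physics-accurate digital twin.

```python
import random


def train_in_simulation(policy, simulator, n_scenarios=100_000):
    """Refine a control policy across many simulated cases (illustrative only)."""
    for _ in range(n_scenarios):
        # Randomize the scenario: part placement, lighting, tool wear, disturbances.
        scenario = {
            "part_offset_mm": random.uniform(-5.0, 5.0),
            "lighting_lux": random.uniform(200, 2000),
            "tool_wear_pct": random.uniform(0, 30),
        }
        simulator.reset(scenario)

        # Roll out the policy in the virtual plant and score the result.
        outcome = simulator.run(policy)

        # The learning step happens in simulation, never on live hardware.
        policy.update(outcome)
```

Because every iteration is virtual, the cost of a failed case is compute time rather than scrapped parts or damaged equipment, which is what makes “hundreds and thousands of cases over a weekend” feasible.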
This is Physical AI as an industrial operating system—not a chatbot.
AMD’s Perspective: Physical AI Is the Hardest Problem in Computing
If Siemens represents the application layer of Physical AI, AMD represents the compute substrate that enables it.
Lisa Su repeatedly emphasized that Physical AI is fundamentally more difficult than language AI.
“Physical AI is one of the toughest challenges in technology,” Su said. “It requires building machines that seamlessly integrate multiple types of processing to understand their environment, make real-time decisions, and take precise action—with no margin for error.”
Language models tolerate latency. Physical systems do not. A humanoid robot cannot wait for a cloud response to decide whether it is about to fall. A spacecraft cannot pause mid-descent while an LLM refines its answer.
That is why AMD positions Physical AI as a full-stack problem, spanning CPUs for deterministic control, GPUs and accelerators for perception and simulation, and open software ecosystems that allow developers to move seamlessly between edge and data-center environments.
“Delivering that kind of intelligence takes a full-stack approach,” Su said.
Humanoid Robots, Biomechanics, and Touch
One of the most vivid demonstrations of Physical AI in Su’s keynote came from humanoid robotics.
In conversation with Generative Bionics’ leadership, Su highlighted how physical intelligence emerges from biomechanics, reflexes, and touch—not from text.
“If an artificial agent needs to understand the human world, doesn’t it need a human-like body to experience it?” the robotics team asked.
Their robots rely on touch as a primary source of intelligence, enabling safe human-robot collaboration in factories and healthcare. That sensory data feeds directly into real-time control loops running on AMD hardware—far removed from the probabilistic outputs of language models.
“Touch cannot wait for the cloud,” the team noted.
This is Physical AI operating at human time scales, with human consequences.
Space: The Ultimate Edge Case for Physical AI
Nowhere is the distinction between language AI and Physical AI clearer than in space.
In her CES keynote, Su described how AMD technology is powering autonomous exploration on Mars, lunar missions, and deep-space systems—environments where latency, radiation, mass, and power constraints are unforgiving.
“Space is the ultimate edge environment,” said Blue Origin’s John Polansky.
AI systems used in space must reason about physical terrain, landing dynamics, and hazards in real time. They cannot rely on language inference alone.
“AI becomes a copilot—identifying landing sites, looking out for hazards,” Polansky explained.
This is Physical AI as survival technology.
Digital Worlds That Obey Physics
Another striking example from AMD’s keynote was the rise of spatially generative world models—AI systems that can reconstruct and simulate entire environments from minimal visual input.
These systems generate navigable, persistent 3D worlds that obey physical laws, enabling robotics training, factory simulation, architectural design, and more.
“Once these worlds exist, they feel alive,” Su said. “They react instantly as users or agents move, explore, interact, and create.”
Unlike text-based AI, these models encode geometry, scale, and motion—core requirements for engineering workflows.
The Engineering Inflection Point
Taken together, the messages from Roland Busch and Lisa Su converge on a single conclusion: AI’s center of gravity is shifting from language to physics.
Language models will remain indispensable for documentation, coding, and communication. But the AI systems that reshape engineering, manufacturing, energy, transportation, and space will be Physical AI systems—grounded in 3D space, governed by physics, and executed in real time.
Busch summarized Siemens’ view succinctly:
“We are bringing AI into the physical world…because this is where the biggest benefits are.”
And Su framed the transition as the opening of a new chapter:
“We’re moving from systems that understand words and images passively to systems that help us interact with the world.”
For engineers, that shift is not incremental. It is foundational. Physical AI is not just the next feature—it is the next platform.