
LG AI Research unveiled Exaone 4.0, positioning it as the backbone of a comprehensive end-to-end AI infrastructure, with a distinct focus on B2B and enterprise applications rather than consumer-facing products, according to this IEEE Spectrum story.
Key features of Exaone 4.0 and its ecosystem:
• It’s a hybrid-reasoning model combining language, coding, and multimodal capabilities.
• Benchmarks show it outperforming models from Alibaba, Microsoft, and Mistral, as well as Meta's Llama 4 Scout, on science, math, and coding tasks, though it trails DeepSeek's top model.
• Supports English, Korean, and Spanish, with open access via Hugging Face for research and academic use (see the loading sketch after this list).
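For readers who want to try the open weights, here is a minimal sketch of loading the model with the Hugging Face transformers library. The repository ID used below is an assumption based on LG AI Research's Hugging Face organization (LGAI-EXAONE); check the organization page for the exact release name and license terms.

```python
# Minimal sketch: loading an Exaone 4.0 checkpoint from Hugging Face.
# The repo ID below is an assumption, not confirmed by the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-4.0-32B"  # assumed/hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # let transformers pick an appropriate dtype
    device_map="auto",       # spread weights across available devices
    trust_remote_code=True,  # earlier Exaone releases shipped custom model code
)

prompt = "Summarize the key features of Exaone 4.0 in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```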
At its recent AI Talk 2025 event, LG detailed the broader Exaone ecosystem, including:
• Exaone 4.0 VL, a vision-language multimodal model soon to launch.
• Exaone Path 2.0, focused on healthcare diagnostics.
• Enterprise agents such as ChatExaone (an internal workflow agent), Exaone Data Foundry (rapid synthetic data generation), and a secure on-prem agent for sensitive use cases.
Hardware acceleration and efficiency:
• Runs on FuriosaAI’s RNGD neural processing units (NPUs), delivering inference 2.25× faster than on conventional GPUs.
• Consumes less power, enabling a single NPU rack to produce up to 3.75× more token output per watt than GPU-based racks (see the back-of-envelope check below).
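The two quoted figures are consistent if the NPU rack also draws less power than the GPU rack. The quick back-of-envelope check below illustrates the implied relationship; the 60% power figure it produces is an inference from the article's numbers, not something the source states.

```python
# Back-of-envelope check of how the two quoted efficiency figures relate.
# Assumption (not from the source): tokens-per-watt gain is simply the
# throughput ratio divided by the relative power draw.
speedup = 2.25               # NPU vs. GPU inference throughput (from the article)
tokens_per_watt_gain = 3.75  # NPU vs. GPU tokens per watt (from the article)

# Implied relative power draw of the NPU rack vs. the GPU rack.
implied_power_ratio = speedup / tokens_per_watt_gain
print(f"Implied NPU rack power draw: {implied_power_ratio:.0%} of the GPU rack")
# -> Implied NPU rack power draw: 60% of the GPU rack
```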
LG’s vision extends to enabling autonomous, secure enterprise agents fully hosted on private infrastructure, complete with in-house synthetic data generation and integrated business operation interfaces.
LG is building not just another LLM, but a full-stack, enterprise-class AI platform that integrates advanced models, local deployment, data synthesis, and hardware customization, marking a notable shift in AI infrastructure strategy.