LG introduces Exaone Deep as open source at Nvidia GTC 2025

South Korean firm joins global AI race with its own reasoning model, aiming to rival leading platforms from the US and China

LG Artificial Intelligence (AI) Research has made its Exaone Deep reasoning model available as open source, unveiling it at the Nvidia GPU Technology Conference (GTC) in San Jose, California, which runs through Friday. This move places LG among the few non-U.S. and non-Chinese entities entering the increasingly competitive space of agentic AI.

Reasoning capabilities at the core of Exaone Deep

In artificial intelligence, reasoning refers to the logical process of applying knowledge to solve problems, form conclusions, and generate predictions. Unlike standard AI applications, which often rely on pattern recognition, reasoning models like Exaone Deep are designed to draw inferences and handle complex tasks — a necessary component for achieving what experts describe as "agentic AI."

While companies such as OpenAI, Google, DeepSeek, and Alibaba are actively building foundation models with reasoning capabilities, LG's Exaone Deep stands out as the first reasoning AI model of its kind developed in South Korea.

Model performance measured against global benchmarks

According to performance data published on Hugging Face, the Exaone Deep-32B model demonstrated comparable outcomes in problem-solving and logical reasoning when evaluated against much larger models. Although it contains just 32 billion parameters — roughly 5 percent of the 671 billion in DeepSeek's R1 — Exaone Deep held its ground in several global benchmarks.

The model scored 94.5 on the mathematics section of Korea's College Scholastic Ability Test, achieving the highest grade, and 95.7 points on MATH-500, a benchmark for mathematical reasoning. In scientific reasoning, Exaone Deep scored 66.1 on the Google-Proof Q&A benchmark (GPQA), which covers graduate-level questions in physics, chemistry, and biology. In programming, the model earned 59.5 points on LiveCodeBench, showing a broad range of capabilities.

Smaller variants and real-world applications

Along with the flagship 32B version, LG AI Research released two smaller models: Exaone Deep-7.8B, which achieves 95 percent of the 32B model's performance at only 24 percent of its size, and Exaone Deep-2.4B, an on-device model that retains 86 percent of the flagship's performance while occupying just 7.5 percent of its memory footprint.

An LG AI Research official said, “Just a month after announcing that we would release an AI model whose performance is on par with DeepSeek R1, we now proudly present Exaone Deep.” The lab emphasized that “the core of LG's AI technology is maintaining performance while significantly reducing model size.”

A strategic step in LG's AI roadmap

Developed as a base model, Exaone supports various AI applications across the LG Group, including LG Uplus’ ixi-series and upcoming LG Electronics devices. The open-sourcing of Exaone Deep aligns with LG Group Chairman Koo Kwang-mo’s broader strategy to integrate AI more seamlessly into everyday life.

In his New Year speech, Koo stated that the group will “create a new lifestyle where people use cutting-edge technologies such as AI conveniently in their daily lives, allowing them to spend their precious time on more enjoyable and meaningful tasks.”

The model has been included in the Notable AI Models list curated by Epoch AI, a U.S.-based nonprofit AI research group. LG AI Research believes Exaone Deep has strong potential for use in scientific and professional fields, especially in environments where model size and performance must be balanced effectively.