
LG Unveils Korea's First Reasoning AI Model: 'ExaOne Deep'

On March 18, LG unveiled South Korea's first reasoning-based artificial intelligence (AI) model. Unlike conventional AI systems, which draw answers chiefly from pre-existing training data, a reasoning model derives responses through step-by-step logical thinking akin to human cognition. China's DeepSeek is a prominent example, having drawn international attention for its efficiency. With major players such as OpenAI and DeepSeek racing to advance reasoning AI, LG's new model places South Korea squarely in that competition. For now, however, LG plans to restrict access to the model, reserving it for internal use aimed at improving future products.

LG AI Research has introduced ExaOne Deep, headlined by its flagship model, ExaOne Deep-32B. The model has 32 billion parameters, the internal connections an AI adjusts during training and uses during inference. A higher parameter count generally means better performance, but it also demands more AI processors, so companies are increasingly focused on achieving strong results with fewer parameters.

DeepSeek's R1 has 671 billion parameters; ExaOne Deep-32B has roughly 5% as many. Despite the gap, benchmark testing shows LG's model matching DeepSeek-R1's capabilities. Compared against top-tier reasoning AIs such as DeepSeek-R1 and Alibaba's QwQ-32B, ExaOne Deep-32B stood out in mathematics. On the 2024 U.S. Mathematical Olympiad, it scored 90, surpassing both DeepSeek-R1 and QwQ-32B, which each scored 86.7. On the math section of South Korea's 2025 CSAT, it posted the top score of 94.5 among the models compared. On doctorate-level science questions, it scored 66.1, edging out QwQ-32B's 63.3.

It lagged, however, in coding and language ability. On the Massive Multitask Language Understanding (MMLU) benchmark, it scored 83, behind Alibaba's QwQ-32B (87.4) and DeepSeek-R1 (90.8). An industry expert commented, "AI reasoning models focus primarily on solving mathematical and scientific problems, so their language proficiency tends to be weaker than that of larger models."

LG AI Research also unveiled two smaller models: the lightweight ExaOne Deep-7.8B and the on-device ExaOne Deep-2.4B. According to the institute, "The lightweight model is only 24% the size of the 32B model yet retains 95% of its performance, while the on-device model delivers 86% of the performance at just 7.5% of the size." As DeepSeek did, LG has released the models' source code openly to developers.

LG provides the source code free of charge, but the AI model itself remains limited to internal use. Offering it as a public service, as OpenAI does with ChatGPT, would require massive data centers and an investment of at least several trillion won.

In South Korea, Naver is also developing its own AI models. It introduced HyperCLOVA X in 2023 and later cut its parameter count by about 60%, which improved its reasoning ability and, according to Naver, reduced the new model's operating costs by more than 50%. Naver is also building a dedicated reasoning-focused AI model, and prominent AI start-up Upstage has begun full-scale development of reasoning AI as well.
