
Nvidia Corp. announced on Tuesday that its H100 graphics processing unit (GPU) and NeMo Framework are being used in LG AI Research Institute’s latest generative AI model, EXAONE 3.0.
The model, which was released in August, is designed to push the boundaries of AI in both Korean and English language tasks.
According to Nvidia, the NeMo Framework provides an end-to-end solution for building and deploying generative AI models.
This allows users to train large language models (LLMs) quickly, customize them, and deploy solutions at various scales, ultimately reducing the time needed for model development.
Nvidia also highlighted that EXAONE 3.0 leverages its TensorRT-LLM software development kit (SDK), which accelerates and optimizes the inference performance of large language models on Nvidia's AI platforms.
The SDK helps improve the efficiency of the latest LLMs, making them more adaptable to diverse applications.
EXAONE 3.0 has demonstrated superior benchmark performance in both Korean and English compared to open-source AI models of a similar scale, such as Meta’s Llama.
The model is available for free use for research purposes.
[ⓒ Maeil Business Newspaper & mk.co.kr. Unauthorized reproduction, redistribution, and use for AI training prohibited.]