MediaTek Demonstrating On-Device Generative AI Using Llama 2 LLM at MWC 2024

Feb 22, 2024

Last August, we announced that we are working closely with Meta to leverage Llama 2, Meta's open-source Large Language Model (LLM), as part of our ongoing investment in the technology and ecosystem that will enable the future of AI. In particular, we are combining our latest APUs and the NeuroPilot AI platform with the capabilities of Llama 2 so that generative AI applications can run directly on-device, rather than exclusively through cloud computing.

On-device (or edge) generative AI offers developers and users several advantages: seamless performance, greater privacy, better security and reliability, lower latency, the ability to work in areas with little to no connectivity, and lower operating costs.

Running Llama 2 on-device requires chipsets capable of handling the workload without assistance from the cloud. The MediaTek Dimensity 9300 and Dimensity 8300 SoCs, both announced late last year, are fully integrated and optimized to support Llama 2 7B applications.

At Mobile World Congress 2024, MediaTek will demonstrate, for the first time, an optimized Llama 2 generative AI application running on-device with APU edge hardware acceleration on the Dimensity 9300 and 8300. The demo features a tool that generates social-media-ready summaries of articles and other long-form copy. We invite you to join us at Booth 3D10 in Hall 3 to experience it.
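For readers curious about the summarization task itself, the minimal sketch below shows the kind of prompt-and-generate flow involved, using the publicly released Llama 2 7B Chat weights through the Hugging Face transformers API. It is only an illustration of the task: the function name, prompt wording, and generation settings are assumptions of this sketch, and the MWC demo runs the model through MediaTek's APU and NeuroPilot stack on-device rather than through this workstation-side code.

```python
# Illustrative sketch only: summarizing an article with the public
# Llama 2 7B Chat weights via Hugging Face transformers. This is NOT
# MediaTek's NeuroPilot on-device pipeline; it simply shows the task.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # public Llama 2 7B Chat weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")


def summarize_for_social(article_text: str, max_new_tokens: int = 120) -> str:
    """Ask Llama 2 Chat for a short, social-media-ready summary (hypothetical helper)."""
    # Llama 2 Chat expects its [INST] ... [/INST] instruction format.
    prompt = (
        "[INST] Summarize the following article in two or three sentences "
        "suitable for a social media post:\n\n"
        f"{article_text} [/INST]"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=False
    )
    # Strip the prompt tokens so only the generated summary is returned.
    summary = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return summary.strip()


if __name__ == "__main__":
    print(summarize_for_social("MediaTek will demonstrate on-device generative AI at MWC 2024..."))
```

On a phone, the same 7B model would instead be quantized and executed by the APU through NeuroPilot, which is what makes the low-latency, offline operation described above possible.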
