    MediaTek Dimensity Platforms Will Support the New Llama 3.2 Generative AI LLMs

    Today, Meta introduced Llama 3.2, the latest generation of its open-source Large Language Model (LLM) for AI applications. This version brings new capabilities to the ecosystem, including multimodal support for reasoning on high-resolution images with the 11B and 90B models, as well as lightweight 1B and 3B models that allow Llama to run efficiently almost anywhere, including on mobile and other edge devices.

    Building on the framework MediaTek established for the Llama 2 and Llama 3 models, we are once again combining NPU hardware acceleration with software tools to officially support Llama 3.2 on Dimensity platforms, including the upcoming Dimensity 9400 and other Generative AI-enabled platforms.

    The Llama 3.2 release includes several new, updated, and highly differentiated models across a spectrum of sizes and capabilities, as well as robust system-level safety support, including image input guardrails. The 1B, 3B, and 11B models support on-device use cases such as knowledge retrieval and summarization, instruction following, and rewriting tasks running locally at the edge. MediaTek's ability to run Llama 3.2 on-device provides many benefits for developers and users, including faster response times, lower latency, and reduced power consumption. The smaller Llama models also use less memory, enabling on-device Generative AI solutions with a smoother user experience.

    Developers will be able to use Llama 3.2 through MediaTek’s NeuroPilot SDK, a toolkit that enables and optimizes on-device Generative AI inference across our diverse product portfolio, including mobile platforms and edge-AI capable devices.

    For more information about Meta Llama 3.2 and what it brings to developers leveraging the MediaTek Dimensity platform, please visit: