Meta Releases Quantized Versions of Llama 3.2 1B and 3B Models: Reducing Power Consumption and Enhancing Mobile Application Potential

In the rapidly evolving world of artificial intelligence, Meta officially released quantized versions of the Llama 3.2 1B and Llama 3.2 3B models on October 24, 2024. The release follows the open-sourcing of Llama 3.2 in September 2024 and marks a major step forward in Meta's efforts to optimize deep learning models for deployment. As demand for on-device AI applications grows, quantized models have become increasingly important.

Significant Advantages of Quantized Models

After quantization, the Llama 3.2 1B model improves markedly on several fronts. First, model size shrinks by an average of 56%, so the model loads and runs faster under the same hardware conditions. Second, RAM usage drops by an average of 41%, which is particularly important for resource-limited mobile devices. Inference speed improves by 2 to 4 times, greatly enhancing the user experience, and power consumption falls accordingly, making Llama 3.2 1B well suited to lightweight application scenarios.
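As a back-of-envelope check on those figures, the sketch below (plain Python, assuming roughly 1.2 billion parameters for the 1B model) compares the raw weight storage of a BF16 checkpoint with a 4-bit one. Note that the reported 56% average reduction is smaller than the raw 4x weight compression, since some layers and quantization metadata stay at higher precision.

```python
# Back-of-envelope weight-storage comparison (illustrative only).
# Assumption: Llama 3.2 1B has roughly 1.2e9 parameters.
PARAMS = 1.2e9

bf16_bytes = PARAMS * 2    # BF16 stores 2 bytes per weight
int4_bytes = PARAMS * 0.5  # 4-bit quantization stores 0.5 bytes per weight

print(f"BF16 checkpoint: ~{bf16_bytes / 1e9:.1f} GB")
print(f"INT4 checkpoint: ~{int4_bytes / 1e9:.1f} GB")
print(f"Raw compression: {bf16_bytes / int4_bytes:.0f}x")
# The reported 56% average reduction is lower than this raw 4x because
# embeddings, certain layers, and quantization metadata (scales, zero
# points) remain at higher precision.
```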

Simply put, model quantization converts a model's floating-point weights into lower-precision, fixed-point (integer) representations. This compresses the model and reduces its computational cost, allowing deep learning models to run efficiently on comparatively weak mobile hardware. As more intelligent applications move onto mobile devices, the value of quantized models becomes increasingly apparent.
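To make the idea concrete, here is a minimal PyTorch sketch of symmetric 8-bit quantization applied to a single weight tensor. It illustrates the general principle only; it is not Meta's actual scheme, which uses lower-bit, more sophisticated quantization.

```python
import torch

def quantize_symmetric_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = w.abs().max() / 127.0  # map the largest |weight| to 127
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate floating-point tensor from int8 values."""
    return q.float() * scale

w = torch.randn(4, 4)  # a toy "weight matrix"
q, scale = quantize_symmetric_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", (w - w_hat).abs().max().item())
```

The int8 tensor needs a quarter of the memory of a float32 one, at the cost of the small rounding error printed above.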

Exploring Technical Methods for Quantization

To preserve the quality of Llama 3.2 1B during quantization, Meta mainly adopted two methods:

Quantization-Aware Training (QAT): This method prioritizes accuracy, training the model with simulated quantization so that it retains high precision after being quantized (a minimal sketch of the core idea follows this list).

Post-Training Quantization (SpinQuant): This method prioritizes portability, quantizing an already-trained model so that Llama 3.2 1B can run across a wide range of devices and usage needs.
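The core trick in QAT is to simulate quantization in the forward pass while letting gradients flow through the rounding step unchanged (a straight-through estimator). The fragment below is a minimal PyTorch sketch of that idea, not Meta's actual training recipe:

```python
import torch

def fake_quant(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Simulate low-bit quantization in the forward pass while passing
    gradients straight through (straight-through estimator)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Forward pass uses the quantized values; backward pass sees identity.
    return w + (w_q - w).detach()

# Toy example: a linear layer trained with fake-quantized weights.
w = torch.randn(8, 8, requires_grad=True)
x = torch.randn(2, 8)
y = x @ fake_quant(w).T
loss = y.pow(2).mean()
loss.backward()  # gradients reach w even though round() has zero gradient
print(w.grad.abs().sum() > 0)
```

Training this way lets the model adapt its weights to the quantization error, which is why QAT tends to preserve accuracy better than quantizing after the fact.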

In this release, Meta introduced two quantized variants each of Llama 3.2 1B and Llama 3.2 3B (a brief download example follows the list):

Llama 3.2 1B QLoRA

Llama 3.2 1B SpinQuant

Llama 3.2 3B QLoRA

Llama 3.2 3B SpinQuant
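For readers who want to try one of these checkpoints, below is a minimal sketch using the huggingface_hub Python client. The repository id shown reflects the naming Meta used on Hugging Face at release, but verify the exact id on the model page and accept the Llama license there first.

```python
# Minimal sketch: fetch one of the quantized checkpoints from Hugging Face.
# Assumptions: you have accepted the Llama license on the model page and are
# logged in (huggingface-cli login); the repo id below follows Meta's naming
# at release and should be verified against the model page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("meta-llama/Llama-3.2-1B-Instruct-QLORA_INT4_EO8")
print("Model files downloaded to:", local_dir)
# These checkpoints are packaged for on-device runtimes (e.g., ExecuTorch)
# rather than for standard transformers loading.
```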

Performance Comparison and Practical Applications

In Meta's tests, the quantized Llama 3.2 1B model showed significant improvements in speed, RAM usage, and power consumption over the Llama BF16 baseline while maintaining nearly the same accuracy. Although the quantized models support a context length of 8,000 tokens (versus 128,000 for the original release), benchmark results show that their real-world performance remains close to the BF16 versions, which greatly enhances their practicality.

Meta also conducted field tests on multiple mobile platforms, including the OnePlus 12, Samsung S24+ and S22, and unspecified Apple iOS devices, with results described as showing "good performance," laying the groundwork for the Llama 3.2 1B model in real-world applications.

XXAI to Introduce More AI Models Soon

The AI assistant software XXAI is about to receive a major update that will add more top-tier AI models, including not only the Llama 3.2 1B and Llama 3.2 3B discussed in this article but also Gemini 1.5 Pro, Grok 2, and Claude 3 Opus, all among the top-ranked AI models on the market. Importantly, XXAI's pricing remains unchanged: the annual plan works out to just $9.90 per month, giving users unlimited access to top-tier AI at an affordable price.

Conclusion

The quantized versions of Llama 3.2 1B and Llama 3.2 3B strike a successful balance between performance and energy efficiency. This innovation will help drive the widespread adoption of artificial intelligence on mobile devices, enabling more intelligent applications to run smoothly on resource-constrained hardware. As Meta continues to explore and push these boundaries, future intelligent devices will play an ever greater role across many fields.