
Using the Power10 Chip to Speed Up AI Inferencing

Posted by Marbenz Antonio on March 20, 2023


An inferencing model is an AI model that has been trained to recognize patterns of interest in data and to derive insights from it.

Compared to training an artificial intelligence (AI) model, inferencing requires far less computing power. As a result, it is feasible, and more energy-efficient, to perform inferencing without additional hardware accelerators such as GPUs, even on edge devices. It's not uncommon for AI inferencing models to run on smartphones and similar devices using just the CPU; in fact, many of the picture and face filters found in social media apps rely on AI inferencing models.

IBM’s Power10 chip

IBM was a trailblazer in incorporating on-processor inferencing accelerators, which it dubbed Matrix Math Accelerator (MMA) engines, into its IBM Power10 chip. With the MMA engines, the Power10 platform can outpace other hardware architectures without relying on additional GPUs and the extra energy they consume: it derives insights from data quickly while using significantly less energy than GPU-based systems, which makes it a strong choice for AI applications.

When using IBM Power10 for AI, particularly for inferencing, AI DevOps teams don't need to exert any additional effort. Data science libraries, including OpenBLAS, libATen, Eigen, and MLAS, among others, have already been optimized to utilize the MMA engines. Consequently, AI frameworks that build on these libraries, such as PyTorch, TensorFlow, and ONNX, can already take advantage of the on-chip acceleration. These optimized libraries are available through the RocketCE channel on anaconda.org.
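As a minimal sketch of what "no additional effort" means in practice, the PyTorch code below is identical to what you would run on any other CPU platform; on Power10, the optimized kernels shipped through RocketCE dispatch the matrix math to the MMA engines behind the scenes. The RocketCE channel name comes from the paragraph above, but the exact package name in the install comment is an assumption and should be checked against the channel's listing.

```python
# Install (assumed package name; verify on the rocketce channel at anaconda.org):
#   conda install -c rocketce pytorch-cpu

import torch

# Ordinary CPU inference code: nothing Power10-specific appears here, because
# the MMA acceleration happens inside the optimized libraries underneath.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

x = torch.randn(32, 512)          # a batch of 32 feature vectors
with torch.inference_mode():      # skip autograd bookkeeping during inference
    logits = model(x)
print(logits.shape)               # torch.Size([32, 10])
```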

IBM Power10 can also accelerate inferencing by operating on reduced-precision data. Rather than 32-bit floating point values, for instance, the inference model can be fed 16-bit floating point values, which lets the processor move twice as much data through each operation. For some models this works without compromising the accuracy of the results.
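Here is a hedged sketch of that idea in PyTorch. The choice of bfloat16 is ours for illustration (other reduced-precision formats exist), and, as noted above, the accuracy impact has to be validated per model, which is what the final comparison line checks.

```python
import copy
import torch

model = torch.nn.Linear(512, 10).eval()              # full-precision reference
model_bf16 = copy.deepcopy(model).to(torch.bfloat16)  # 16-bit copy of the weights

x = torch.randn(32, 512)
with torch.inference_mode():
    out_fp32 = model(x)                               # 32-bit float path
    out_bf16 = model_bf16(x.to(torch.bfloat16))       # reduced-precision path

# Each bfloat16 value is half the size of a float32, so twice as much data
# fits per unit of memory bandwidth. Check the accuracy cost for this model:
print((out_bf16.float() - out_fp32).abs().max())
```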

Inferencing is the final phase of the AI DevOps cycle, and the IBM Power10 platform was purposefully designed to be AI-optimized. As a result, clients can extract insights from data more cost-effectively, both in terms of energy efficiency and by reducing the requirement for additional accelerators.

Conclusions

Leveraging the IBM Power10 chip can significantly accelerate AI inferencing while reducing energy consumption and the need for additional accelerators. The Matrix Math Accelerator (MMA) engines built into the chip enhance the speed and efficiency of inferencing without requiring any additional effort from AI DevOps teams. Furthermore, the ability to process reduced-precision data can improve inferencing performance further, often with little or no loss of accuracy. All of these factors make the IBM Power10 chip an ideal choice for clients seeking to extract insights from data cost-effectively.

Here at CourseMonster, we know how hard it can be to find the right time and funds for training. We provide effective training programs that let you select the option that best meets your company's needs.

For more information, please get in touch with one of our course advisers today or contact us at training@coursemonster.com
