AI Accelerator Support in Onnx-mlir Deep Learning Compiler

Abstract

Onnx-mlir is an open-source compiler that compiles artificial intelligence (AI) models in the Open Neural Network Exchange (ONNX) format into native code for different architectures such as x86, Power, and Z processors. It is built on the Multi-Level Intermediate Representation (MLIR) infrastructure in the LLVM project and relies on the MLIR concept of dialects to implement its functionality. In this paper, we present our work on extending onnx-mlir to generate and optimize code for the IBM Telum on-chip AI accelerator (zAIU) introduced in the IBM z16 mainframe. Specifically, we propose two dialects: (1) the zhigh dialect to represent high-level functions on the zAIU, and (2) the zlow dialect to represent low-level computation on the zAIU. These dialects facilitate graph-level and memory-level optimizations, respectively. We explain our extension of onnx-mlir by tracing several models through the proposed dialects, and we include early optimization work and performance results.

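To make the two-level lowering concrete, below is a minimal sketch of how a single matrix multiplication might flow through the proposed dialects. It is illustrative only: the operation names (zhigh.Stick, zhigh.Unstick, zhigh.MatMul, zlow.matmul), the use of an f16 element type to stand in for the accelerator's internal format, and the type signatures are assumptions based on the abstract's description, not the exact onnx-mlir IR.

// zhigh: graph level. Inputs are converted ("stickified") into the
// accelerator's internal layout, the compute op runs on the zAIU, and
// the result is converted back. Graph-level rewrites (e.g., canceling
// an Unstick immediately followed by a Stick) apply at this level.
func.func @matmul(%a: tensor<4x8xf32>, %b: tensor<8x16xf32>) -> tensor<4x16xf32> {
  %0 = "zhigh.Stick"(%a) : (tensor<4x8xf32>) -> tensor<4x8xf16>
  %1 = "zhigh.Stick"(%b) : (tensor<8x16xf32>) -> tensor<8x16xf16>
  %2 = "zhigh.MatMul"(%0, %1) : (tensor<4x8xf16>, tensor<8x16xf16>) -> tensor<4x16xf16>
  %3 = "zhigh.Unstick"(%2) : (tensor<4x16xf16>) -> tensor<4x16xf32>
  return %3 : tensor<4x16xf32>
}

// zlow: memory level. After bufferization, ops read from and write to
// pre-allocated memrefs, exposing memory-level optimizations such as
// buffer reuse and copy elimination.
func.func @matmul_zlow(%a: memref<4x8xf16>, %b: memref<8x16xf16>,
                       %out: memref<4x16xf16>) {
  "zlow.matmul"(%a, %b, %out) : (memref<4x8xf16>, memref<8x16xf16>, memref<4x16xf16>) -> ()
  return
}

The split mirrors the tensor-to-buffer progression already used in onnx-mlir's CPU path: graph rewrites stay on immutable tensors at the zhigh level, while buffer-oriented passes operate on memrefs at the zlow level.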