AI Accelerator Support in Onnx-mlir Deep Learning Compiler
Description
Onnx-mlir is an open-source compiler that compiles artificial intelligence (AI) models in the Open Neural Network Exchange (ONNX) format into native code for architectures such as x86, Power, and Z processors. It is built on the Multi-Level Intermediate Representation (MLIR) infrastructure in the LLVM project and relies on the MLIR concept of dialects to implement its functionality. In this paper, we present our work extending onnx-mlir to generate and optimize code for the IBM Telum on-chip AI accelerator (zAIU) introduced in the IBM z16 mainframe. Specifically, we propose two dialects: (1) the zhigh dialect, which represents high-level functions on the zAIU, and (2) the zlow dialect, which represents low-level computation on the zAIU. The zhigh dialect facilitates graph-level optimizations, while the zlow dialect facilitates memory-level optimizations. We explain our extension of onnx-mlir by following several models through the proposed dialects, and we include early optimization work and performance results.
Journal
- IPSJ Transactions on Programming (PRO) (情報処理学会論文誌プログラミング(PRO))
- 16 (2), 33-33, 2023-06-29
- Information Processing Society of Japan
Details
- CRID: 1050859616045258624
- NII Book ID: AA11464814
- ISSN: 18827802
- Text Lang: en
- Article Type: journal article
- Data Source: IRDB