Demystifying Parallel and Distributed Deep Learning
- Tal Ben-Nun (ETH Zurich, Zürich, Switzerland)
- Torsten Hoefler (ETH Zurich, Zürich, Switzerland)
Bibliographic Information
- Other Title: An In-depth Concurrency Analysis
Description
Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
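Among the concurrency schemes the survey reviews, synchronous data parallelism is the most common: each worker computes a gradient on its shard of the global batch, the gradients are averaged, and every replica applies the same update. Below is a minimal sketch, assuming a toy linear least-squares model with in-process simulated workers; the helper names `gradient` and `data_parallel_sgd_step` are illustrative and not code from the paper.

```python
import numpy as np

def gradient(w, X, y):
    """Mean-squared-error gradient for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_sgd_step(w, X, y, num_workers=4, lr=0.1):
    """One synchronous data-parallel step: shard, compute, average, update."""
    X_shards = np.array_split(X, num_workers)
    y_shards = np.array_split(y, num_workers)
    # Each "worker" computes a gradient on its local shard.
    local_grads = [gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    # Averaging emulates an allreduce: every replica sees the same gradient
    # and therefore applies the same update, keeping the models in sync.
    g = np.mean(local_grads, axis=0)
    return w - lr * g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true
    w = np.zeros(3)
    for _ in range(200):
        w = data_parallel_sgd_step(w, X, y)
    print(w)  # approaches w_true
```

In an actual distributed setting the averaging step would be an allreduce over the network rather than an in-process mean; the asynchronous variants discussed in the survey relax the requirement that all workers contribute before each update.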
Journal
- ACM Computing Surveys 52 (4), 1-43, 2019-08-30
- Association for Computing Machinery (ACM)
Details
- CRID: 1360011143852866944
- DOI: 10.1145/3320060
- ISSN: 1557-7341 (online), 0360-0300 (print)
- Data Source: Crossref