Supporting Low-Latency CPS Using GPUs and Direct I/O Schemes
Description
Graphics processing units (GPUs) are increasingly being used for general-purpose parallel computing. They provide significant performance gains over multi-core CPU systems and are an easily accessible alternative to supercomputers. The architecture of general-purpose GPU (GPGPU) systems, however, poses challenges in efficiently transferring data between the host and device(s). Although commodity many-core devices such as NVIDIA GPUs provide more than one way to move data around, it is unclear which method is most effective for a particular application. This presents difficulty in supporting latency-sensitive cyber-physical systems (CPS). In this work, we present a new approach to data transfer in a heterogeneous computing system that allows direct communication between GPUs and other I/O devices. In addition to adding this functionality, our system also improves communication between the GPU and the host. We analyze the current vendor-provided data communication mechanisms and identify which methods work best for particular tasks with respect to throughput and total time to completion. Our method allows a new class of real-time cyber-physical applications to be implemented on a GPGPU system. The results of the experiments presented here show that GPU tasks can be completed in 34 percent less time than with current methods. Furthermore, effective data throughput is at least as good as that of the current best performers. This work is part of the concurrent development of Gdev, an open-source project to provide Linux operating system support for many-core device resource management.
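The abstract notes that NVIDIA GPUs offer more than one way to move data between the host and the device. As an illustration only (this code is not from the paper or from Gdev), the minimal CUDA sketch below contrasts two standard runtime transfer paths, pageable versus page-locked (pinned) host memory, whose measured throughput is the kind of quantity the comparison in the paper concerns. The buffer size and variable names are arbitrary assumptions.

```cuda
// Minimal sketch: compare host-to-device copy throughput for pageable
// vs. pinned host memory using the standard CUDA runtime API.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 64UL << 20;  // 64 MiB test buffer (arbitrary size)

    // Pageable host buffer: the driver must stage the copy internally.
    float *pageable = (float *)malloc(bytes);

    // Pinned host buffer: the DMA engine can access it directly.
    float *pinned = nullptr;
    cudaMallocHost((void **)&pinned, bytes);

    float *dev = nullptr;
    cudaMalloc((void **)&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms = 0.0f;

    // Time the pageable-memory copy.
    cudaEventRecord(start);
    cudaMemcpy(dev, pageable, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pageable: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    // Time the pinned-memory copy.
    cudaEventRecord(start);
    cudaMemcpy(dev, pinned, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pinned:   %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaFree(dev);
    cudaFreeHost(pinned);
    free(pageable);
    return 0;
}
```

Pinned memory usually sustains higher bandwidth because the copy can be DMA'd directly, which is one reason the choice of transfer mechanism matters for latency-sensitive workloads.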
Published in
- 2012 IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, pp. 437-442, IEEE, 2012-08-01