Parallel Programming with Message Passing Library and Its Precision of Calculation.

  • TAMURA Katsuhiro
    Shizuoka Industrial Research Institute of Shizuoka Prefecture
  • INADOMI Yuichi
    National Institute of Advanced Industrial Science and Technology, Tsukuba Advanced Computing Center
  • NAGASHIMA Umpei
    National Institute of Advanced Industrial Science and Technology, Tsukuba Advanced Computing Center

Bibliographic Information

Other Title
  • メッセージ通信ライブラリを用いたプログラムの並列化例と計算速度および計算精度の評価 (Examples of program parallelization using a message-passing library and an evaluation of computation speed and accuracy)

Description

The efficiency of parallel processing, in both computation speed and accuracy, was demonstrated with two programs that sum the value 0.1, a repeating fraction in binary, 10^9 times. One program accumulates the sum sequentially (program1); the other uses a partial-sum technique, forming 10^4 partial sums of 10^5 additions each (program2). Both programs were parallelized with the Message Passing Interface (MPI) and executed on four parallel computers, an Alta Technology AltaCluster, a Hitachi SR8000, an IBM RS/6000 SP, and an SGI Origin2000, using up to 8 processors. Performance improved almost proportionally with the number of processors, because the communication cost is small compared with the computation. Computing precision was quite similar on the four machines. For program1 the precision improved drastically as the number of processors increased, whereas little improvement was observed for program2. This clearly shows that the accumulation of numerical error, namely the loss of significant digits, can be avoided by parallel processing.
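
A minimal sketch of the parallelization described above, assuming C and the standard MPI API (the constant N, the block decomposition, and the use of MPI_Reduce are illustrative choices, not the authors' actual programs): each process accumulates only its own share of the 10^9 additions of 0.1, and the partial sums are then combined on one process, so far fewer additions pass through any single accumulator.

    /* Hypothetical example, not the paper's code: parallel summation of 0.1 with MPI. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        const long N = 1000000000L;        /* 10^9 additions of 0.1 in total */
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Block decomposition: each rank handles a contiguous share of the N terms. */
        long chunk = N / size;
        long start = (long)rank * chunk;
        long end   = (rank == size - 1) ? N : start + chunk;

        double local = 0.0;
        for (long i = start; i < end; ++i) {
            local += 0.1;                  /* 0.1 is not exactly representable in binary */
        }

        /* Combine the partial sums on rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("sum = %.15e (exact value would be 1.0e8)\n", total);
        }

        MPI_Finalize();
        return 0;
    }

With more processes, each local accumulator grows to a smaller final value, so each added 0.1 retains relatively more of its significant bits and less information is lost per addition; this is the same mechanism by which program1's accuracy improves with the processor count.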

