Bibliographic Details
- Alternative titles
-
- The DNN Learning Method for Few Training Data via Knowledge Transfer
- Ten'i Gakushū to Chishiki Jōryū ni yoru Shōsū Gakushū Dēta no tame no DNN Gakushūhō (reading of the Japanese title: "DNN learning method for small training data via transfer learning and knowledge distillation")
Abstract
<p>Deep Neural Network (DNN) models have a huge number of parameters. This allows DNNs to achieve good performance, but it also causes two problems. The first is that training so many parameters requires an enormous amount of training data. The second is that high-spec devices are required, because training so many parameters is computationally expensive. These problems hinder deploying DNNs in real tasks. To solve them, we propose a new DNN learning method that combines transfer learning and knowledge distillation. The characteristic point of our proposed method is that we learn the DNN parameters by applying both techniques simultaneously, i.e., we transfer the feature map of the teacher DNN to a student DNN that is smaller than the teacher DNN.</p>
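The combined objective sketched in the abstract, matching the student's intermediate feature map to the teacher's while also distilling the teacher's softened outputs, can be illustrated as follows. This is a minimal NumPy sketch of the general technique, not the authors' implementation; the temperature `T`, weighting `lam`, and tensor shapes are assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_feat, teacher_feat,
                      student_logits, teacher_logits,
                      T=4.0, lam=0.5):
    """Illustrative feature-transfer + knowledge-distillation loss.

    - Feature term: MSE between student and teacher intermediate
      feature maps (the "transfer" part).
    - Output term: cross-entropy of the student's softened outputs
      against the teacher's softened outputs (the "distillation" part).
    T and lam are hypothetical defaults, not values from the paper.
    """
    feat_term = np.mean((student_feat - teacher_feat) ** 2)
    p_teacher = softmax(teacher_logits, T)  # soft targets from teacher
    p_student = softmax(student_logits, T)
    kd_term = -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1))
    return lam * feat_term + (1.0 - lam) * kd_term

# Toy example: batch of 2, feature dim 8, 5 classes.
rng = np.random.default_rng(0)
s_feat, t_feat = rng.normal(size=(2, 8)), rng.normal(size=(2, 8))
s_log, t_log = rng.normal(size=(2, 5)), rng.normal(size=(2, 5))
loss = distillation_loss(s_feat, t_feat, s_log, t_log)
```

In an actual training loop both terms would be minimized jointly by gradient descent on the student's parameters, with the teacher frozen.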
Published In
-
- 電気学会論文誌C(電子・情報・システム部門誌) (IEEJ Transactions on Electronics, Information and Systems), 140 (6), 664-672, 2020-06-01
- 一般社団法人 電気学会 (The Institute of Electrical Engineers of Japan)
Details
- CRID: 1390003825184612864
- NII Article ID: 130007850265
- NII Bibliographic ID: AN10065950
- ISSN: 13488155, 03854221
- NDL Bibliographic ID: 030469467
- Text language code: ja
- Data source types: JaLC, NDL, Crossref, CiNii Articles
- Abstract license flag: not permitted