Effective neural network training with adaptive learning rate based on training loss


Description

A method that uses an adaptive learning rate is presented for training neural networks. Unlike most conventional update schemes, in which the learning rate gradually decreases during training, the proposed method increases or decreases the learning rate adaptively so that the training loss (the sum of cross-entropy losses over all training samples) decreases as much as possible. This provides a wider search range for solutions and consequently a lower test error rate. Experiments training a multilayer perceptron on several well-known datasets show that the proposed method is effective for obtaining better test accuracy under certain conditions. (c) 2018 Elsevier Ltd. All rights reserved.
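
The abstract does not give the exact update rule, only that the learning rate is raised or lowered so that the total training loss keeps decreasing. The sketch below illustrates one plausible loss-driven schedule of this kind (a "bold driver"-style rule on a toy problem); the growth/shrink factors, the step rejection, and the quadratic stand-in loss are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of a loss-driven learning-rate schedule (assumed rule,
# not the paper's exact method): grow the rate while the training loss
# decreases, shrink it and reject the step when the loss increases.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem standing in for the cross-entropy training loss.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def training_loss(w):
    return float(np.mean((X @ w - y) ** 2))

def gradient(w):
    return 2.0 * X.T @ (X @ w - y) / len(y)

w = np.zeros(5)
lr = 0.01                      # initial learning rate (assumed value)
prev_loss = training_loss(w)

for epoch in range(200):
    w_new = w - lr * gradient(w)
    loss = training_loss(w_new)
    if loss < prev_loss:
        # Loss decreased: accept the step and enlarge the rate
        # to widen the search range.
        w, prev_loss = w_new, loss
        lr *= 1.1
    else:
        # Loss increased: reject the step and shrink the rate.
        lr *= 0.5

print(f"final loss {prev_loss:.6f}, final lr {lr:.4f}")
```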
