Difficulty in learning vs. network size

Description

When training a neural network to model the distribution of a sample set, two aspects are generally considered: the representation capability needed to describe a complex sample distribution, and the ability to generalize so that novel samples are mapped correctly. The usual conclusion is that, as far as generalization is concerned, the smallest network capable of representing the sample distribution is the best choice. Here we introduce the term "difficulty in learning" and show that for the smallest network the difficulty in learning is very high, especially when the sample distribution is complex. As far as ease of learning is concerned, a slightly larger network is more suitable, especially when the noise is low.
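The trade-off described above can be illustrated with a small numerical sketch (not taken from the paper): a one-hidden-layer tanh network of near-minimal width struggles to fit a target with several turning points, while a slightly wider network trained identically fits it more easily. The widths, learning rate, and step count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
Y = np.sin(np.pi * X)  # one peak and one trough: a mildly complex target

def train_mlp(hidden, steps=5000, lr=0.1):
    """One-hidden-layer tanh network, full-batch gradient descent on MSE."""
    W1 = rng.normal(0.0, 1.0, (1, hidden))
    b1 = rng.normal(0.0, 1.0, hidden)          # scatter the tanh transitions
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 1))
    b2 = np.zeros(1)
    n = len(X)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)               # hidden activations
        P = H @ W2 + b2                        # network predictions
        dP = 2.0 * (P - Y) / n                 # d(MSE)/dP
        dW2, db2 = H.T @ dP, dP.sum(0)
        dZ = (dP @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
        dW1, db1 = X.T @ dZ, dZ.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))

loss_small = train_mlp(hidden=2)   # near-minimal width for this target
loss_big = train_mlp(hidden=16)    # slightly larger network
print(f"width  2: final MSE {loss_small:.4f}")
print(f"width 16: final MSE {loss_big:.4f}")
```

Under these assumed settings the wider network typically reaches a much lower training error, matching the abstract's claim that the slightly larger network is easier to train, even though both widths can in principle represent much of the target's structure.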
