Noise suppression in training data for improving generalization
Description
Multilayer feedforward neural networks are trained using the error backpropagation (BP) algorithm, which minimizes the error between the outputs of a neural network (NN) and the training data. Hence, when the training data are noisy, a trained network memorizes the noisy outputs for the given inputs. Such learning is called rote memorization learning (RML). In this paper we propose error correcting memorization learning (CML), which can suppress noise in the training data. To evaluate the generalization ability of CML, it is compared with the projection learning (PL) criterion. It is theoretically proved that although CML merely suppresses noise in the training data, it provides the same generalization ability as PL under a necessary and sufficient condition.
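The rote-memorization effect the abstract describes can be reproduced in a toy setting. The sketch below is purely illustrative and not taken from the paper: a least-squares model with as many parameters as training points (standing in for an NN trained by BP to zero error) interpolates the noisy targets exactly, so the training error vanishes while the error against the underlying clean signal stays at the noise level. All names and parameters here (the sine target, noise level 0.3, polynomial degree) are assumptions chosen for the demonstration.

```python
# Illustrative sketch of rote memorization learning (RML), not the
# paper's CML method: minimizing the training error on noisy data
# makes the model memorize the noise.
import numpy as np

rng = np.random.default_rng(0)

# Noisy training data: clean signal plus additive noise (assumed setup).
x = np.linspace(0.0, 1.0, 10)
y_clean = np.sin(2.0 * np.pi * x)
y_noisy = y_clean + 0.3 * rng.standard_normal(x.size)

# Degree-9 polynomial: 10 parameters for 10 points, so the
# least-squares fit interpolates the noisy targets exactly.
X = np.vander(x, N=10, increasing=True)
w, *_ = np.linalg.lstsq(X, y_noisy, rcond=None)

fit = X @ w
print("error vs noisy targets:", np.mean((fit - y_noisy) ** 2))  # ~0
print("error vs clean signal :", np.mean((fit - y_clean) ** 2))  # ~noise variance
```

Training error near zero with a large error against the clean signal is exactly the memorization of noisy outputs that CML is designed to suppress.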
Published in
- 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98CH36227), Vol. 3, pp. 2236-2241
- Publisher: IEEE