Optimization of overtraining and overgeneralization
Description
The task of any supervised classifier is to assign optimum boundaries in the input space for the different class memberships. This is done using information from the available set of known samples. This mapping of sample position in the input space to sample class is then used to classify unknown samples. The available set of known samples is generally finite. A boundary defined exactly by that finite sample set is usually not the best boundary for classifying new, unknown samples: we end up with an overfitted boundary, i.e. an overtrained classifier, resulting in poor classification of new samples. We therefore need to smooth the boundary in order to generalize to unknown samples. Depending on the number of known samples and the dimension of the actual solution, there is a certain amount of smoothness that is optimal for generalization. In this paper, we focus on this problem and introduce some practical ways to arrive at optimum smoothness for a single-hidden-layer neural network classifier using radial basis functions.
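As a concrete illustration (not taken from the paper itself), the sketch below shows a single-hidden-layer Gaussian RBF classifier in which the basis-function width `sigma` plays the role of the smoothness parameter discussed above: a small width fits the known samples tightly (overtraining), while a larger width yields smoother class boundaries that may generalize better. The helper names, the ridge term, and the sigma sweep are assumptions for the sketch, not the authors' procedure.

```python
# Minimal sketch, assuming a Gaussian RBF hidden layer with width `sigma`
# as the smoothness knob; NOT the paper's exact method.
import numpy as np

def rbf_design_matrix(X, centers, sigma):
    """Gaussian RBF activations for each (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf_classifier(X, Y_onehot, centers, sigma, ridge=1e-6):
    """Fit the output-layer weights by ridge-regularized least squares."""
    Phi = rbf_design_matrix(X, centers, sigma)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y_onehot)

def predict(X, centers, sigma, W):
    """Class scores for new samples; argmax gives the predicted class."""
    return rbf_design_matrix(X, centers, sigma) @ W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    Y = np.eye(2)[y]                  # one-hot targets
    centers = X[::6]                  # a subset of samples as RBF centers
    for sigma in (0.1, 0.5, 2.0):     # sweep the smoothness parameter
        W = train_rbf_classifier(X, Y, centers, sigma)
        acc = (predict(X, centers, sigma, W).argmax(1) == y).mean()
        print(f"sigma={sigma}: training accuracy={acc:.2f}")
```

Sweeping `sigma` and scoring on held-out samples is one practical way to locate the "optimum smoothness" the abstract refers to; the training accuracy printed above only shows how fit tightness changes with the width.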
Published in
- Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan), vol. 3, pp. 2257-2262, 2005-08-24
- IEEE