<title>Model for selectively increasing learning sample number in character recognition</title>
Description
Increasing the number of learning samples plays an important role in improving recognition accuracy. When it is difficult to collect additional character data written by new writers, distorted characters artificially generated from the original characters by a distortion model can serve as the additional data. This paper proposes a model for selecting those distorted characters that improve recognition accuracy. Binary images are used as the feature vector. In the experiments, recognition based on the k-nearest-neighbor rule is performed on the handwritten zip code database IPTP CD-ROM1. Distorted characters are generated using a new model of nonlinear geometrical distortion, and new learning samples consisting of the original and distorted characters are generated iteratively. The range of the distortion parameter is investigated to yield improved recognition accuracy. The results show that the iterative addition of slightly distorted characters improves recognition accuracy. © (1996) COPYRIGHT SPIE--The International Society for Optical Engineering.
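The abstract describes the augmentation loop only at a high level. The Python/NumPy sketch below is a hypothetical illustration of that loop, not the authors' method: a smooth sinusoidal displacement field stands in for the unspecified nonlinear geometrical distortion, Hamming-distance k-NN on binary images stands in for the recognizer, and a round of distorted samples is kept only if validation accuracy does not drop. The function names, the `alpha` amplitude parameter, and the selection criterion are all assumptions.

```python
import numpy as np


def distort(img, alpha=0.5, freq=1.0, rng=None):
    """Warp a binary character image with a small nonlinear distortion.

    A smooth sinusoidal displacement field is assumed here; the paper's own
    distortion model is not given in the abstract. `alpha` is the
    distortion amplitude in pixels.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    phase = rng.uniform(0.0, 2.0 * np.pi, size=4)
    dx = alpha * np.sin(2 * np.pi * freq * yy / h + phase[0]) * \
         np.cos(2 * np.pi * freq * xx / w + phase[1])
    dy = alpha * np.sin(2 * np.pi * freq * xx / w + phase[2]) * \
         np.cos(2 * np.pi * freq * yy / h + phase[3])
    # Nearest-neighbor resampling keeps the image binary.
    src_y = np.clip(np.rint(yy + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + dx).astype(int), 0, w - 1)
    return img[src_y, src_x]


def knn_accuracy(train_x, train_y, test_x, test_y, k=3):
    """k-nearest-neighbor accuracy with Hamming distance on binary vectors."""
    correct = 0
    for x, y in zip(test_x, test_y):
        dist = np.count_nonzero(train_x != x, axis=1)   # Hamming distance
        votes = train_y[np.argsort(dist)[:k]]           # labels are small ints
        correct += int(np.bincount(votes).argmax() == y)
    return correct / len(test_y)


def augment_iteratively(train_imgs, train_y, val_imgs, val_y,
                        alpha=0.5, rounds=5, k=3, seed=0):
    """Iteratively add slightly distorted characters to the learning set,
    keeping each round only if validation accuracy does not drop
    (a stand-in for the paper's selection model)."""
    rng = np.random.default_rng(seed)
    flatten = lambda ims: np.array([im.ravel() for im in ims])
    cur_imgs, cur_y = list(train_imgs), list(train_y)
    val_x = flatten(val_imgs)
    best = knn_accuracy(flatten(cur_imgs), np.array(cur_y), val_x, val_y, k)
    for _ in range(rounds):
        distorted = [distort(im, alpha, rng=rng) for im in train_imgs]
        cand_imgs, cand_y = cur_imgs + distorted, cur_y + list(train_y)
        acc = knn_accuracy(flatten(cand_imgs), np.array(cand_y), val_x, val_y, k)
        if acc >= best:                                 # selective addition
            cur_imgs, cur_y, best = cand_imgs, cand_y, acc
    return cur_imgs, cur_y, best
```

With the IPTP CD-ROM1 digits loaded as small binary NumPy arrays and split into training and validation sets, one would call `augment_iteratively` with a small `alpha`, mirroring the abstract's finding that only slightly distorted characters improve accuracy.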
Published in
SPIE Proceedings, Vol. 2660, pp. 235-242, 1996-03-07. SPIE.