Combination of fast and slow learning neural networks for quick adaptation and pruning redundant cells
Description
One advantage of the neural network approach is that many instances can be learned with a small number of hidden units. However, such small networks usually require many iterations of the gradient descent algorithm to learn. To achieve quick adaptation with small networks, the paper presents a learning system consisting of several neural networks: a fast-learning network (F-Net), a slow-learning network (S-Net), and a main network (Main-Net). The F-Net learns new instances very quickly, in the manner of k-nearest neighbors, while the S-Net learns the output of the F-Net using a small number of hidden units. The resulting parameters of the S-Net are transferred to the Main-Net, which is used only for recognition. While the S-Net is learning, the system does not learn any new instances, analogous to sleep in biological systems.
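The abstract does not give implementation details, so the following Python sketch is only one possible reading of the F-Net / S-Net / Main-Net scheme: all class names, network sizes, and training choices below are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch (assumed, not from the paper): a k-NN-like fast memory
# (F-Net), a small gradient-descent network (S-Net) that consolidates the
# F-Net's responses, and a recognition-only copy (Main-Net).
import copy
import numpy as np

class FNet:
    """Fast-learning memory: stores instances and answers like k-nearest neighbors."""
    def __init__(self, k=1):
        self.k = k
        self.X, self.y = [], []

    def learn(self, x, y):
        # One-shot learning: simply memorize the new instance.
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(np.asarray(y, dtype=float))

    def predict(self, x):
        d = [np.linalg.norm(np.asarray(x, dtype=float) - xi) for xi in self.X]
        idx = np.argsort(d)[: self.k]
        return np.mean([self.y[i] for i in idx], axis=0)

class SNet:
    """Slow-learning network with few hidden units, trained by gradient descent."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1)
        return self.h @ self.W2

    def train_step(self, x, target):
        x = np.atleast_2d(x)
        err = self.forward(x) - target          # squared-error gradient
        self.W2 -= self.lr * self.h.T @ err
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)
        self.W1 -= self.lr * x.T @ dh

def consolidate(fnet, snet, probe_inputs, epochs=500):
    """Sleep-like phase: the S-Net fits the F-Net's responses;
    no new instances are accepted while this runs."""
    for _ in range(epochs):
        for x in probe_inputs:
            snet.train_step(x, fnet.predict(x))

def copy_to_main(snet):
    """Main-Net is used only for recognition; it simply receives the S-Net's weights."""
    return copy.deepcopy(snet)

# Toy usage: memorize a few instances instantly, then consolidate slowly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
fnet = FNet(k=1)
for x, y in zip(X, Y):
    fnet.learn(x, y)                  # immediate, one-shot learning
snet = SNet(n_in=2, n_hidden=3, n_out=1)
consolidate(fnet, snet, X)            # slow gradient-descent consolidation
main_net = copy_to_main(snet)         # recognition-only copy
```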
Published in
- IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), Vol. 3, pp. 390-395, 2003-01-20
IEEE