Continual Learning Based on Amortized Inference
-
- KAWASHIMA Hirono
- Keio University
-
- KAWANO Makoto
- The University of Tokyo
-
- KUMAGAI Wataru
- RIKEN AIP
-
- MATSUI Kota
- RIKEN AIP
-
- NAKAZAWA Jin
- Keio University
Bibliographic Information
- Other Title
-
- 償却推論にもとづいた継続学習
Description
<p>In continual learning, techniques are actively studied that cope with a growing number of tasks while preventing catastrophic forgetting, in which accuracy on past tasks drops significantly when multiple tasks are learned sequentially. In this study, we propose the Continual Amortized Learning Model (CALM), a new method based on the structure of the Neural Process that preserves the network of past tasks and does not relearn by adding training data. CALM consists of two neural networks: a Task Weight Encoder, which computes task-specific weights, and a Feature Extractor, which extracts features from the input data. By applying the task-specific weights to the features of the input image, task-specific outputs become possible while a single network is shared across all tasks. In experiments on task-incremental learning with Split-MNIST, we verified that task accuracy was maintained even when tasks were learned sequentially with the proposed method.</p>
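The forward pass the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: all layer sizes, parameter names, and the choice to condition the Task Weight Encoder on a one-hot task id are assumptions (in a Neural-Process-style model it would more likely encode a context set for the task). The key idea shown is that a single shared Feature Extractor serves all tasks, and a per-task weight vector modulates its features elementwise before a shared output head.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with ReLU; stands in for either sub-network.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

d_in, d_feat, n_tasks, n_classes = 784, 64, 5, 2  # Split-MNIST-like: 5 binary tasks

# Feature Extractor parameters, shared across all tasks.
fe = (rng.normal(0, 0.1, (d_in, 128)), np.zeros(128),
      rng.normal(0, 0.1, (128, d_feat)), np.zeros(d_feat))

# Task Weight Encoder: maps a task identifier to a feature-wise weight vector.
# (Here a one-hot id for simplicity; the paper may use task context data.)
twe = (rng.normal(0, 0.1, (n_tasks, 32)), np.zeros(32),
       rng.normal(0, 0.1, (32, d_feat)), np.zeros(d_feat))

# Shared classification head applied to the modulated features.
head_w = rng.normal(0, 0.1, (d_feat, n_classes))

def forward(x, task_id):
    feats = mlp(x, *fe)                 # shared features for any task
    onehot = np.eye(n_tasks)[task_id]
    task_w = mlp(onehot, *twe)          # task-specific weight vector
    return (feats * task_w) @ head_w    # task-modulated features -> logits

x = rng.normal(size=(4, d_in))          # a small batch of flattened images
logits = forward(x, task_id=1)
print(logits.shape)  # (4, 2)
```

Because only the weight vector differs between tasks, no per-task network needs to be stored, and switching tasks amounts to a different modulation of the same features.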
Journal
-
- Proceedings of the Annual Conference of JSAI
-
Proceedings of the Annual Conference of JSAI JSAI2020 (0), 2J5GS202-2J5GS202, 2020
The Japanese Society for Artificial Intelligence
Details
-
- CRID
- 1390566775142828800
-
- NII Article ID
- 130007856955
-
- ISSN
- 27587347
-
- Text Lang
- ja
-
- Data Source
-
- JaLC
- CiNii Articles
-
- Abstract License Flag
- Disallowed