Continual Learning Based on Amortized Inference

Bibliographic Information

Other Title
  • 償却推論にもとづいた継続学習

Description

In continual learning, techniques are actively studied for coping with a growing number of tasks while preventing catastrophic forgetting, in which accuracy on past tasks drops sharply when multiple tasks are learned sequentially. In this study, we propose the Continual Amortized Learning Model (CALM), a new method based on the structure of the Neural Process that neither stores networks from past tasks nor retrains on accumulated training data. CALM consists of two neural networks: a Task Weight Encoder, which computes task-specific weights, and a Feature Extractor, which extracts features from the input data. By applying the task-specific weights to the features of the input image, the model produces task-specific outputs while sharing a common network across all tasks. In our experiments on task-incremental learning with Split-MNIST, we verified that per-task accuracy was maintained even when the tasks were learned sequentially with the proposed method.
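The abstract gives only a high-level description of the architecture. As a reading aid, the sketch below shows one way the described two-network structure could look in PyTorch; all class names, layer sizes, the embedding-based task encoder, and the elementwise multiplication used to apply the task weights are assumptions inferred from the abstract, not the paper's actual implementation.

```python
# Minimal sketch of a CALM-like model, assuming:
#  - the Task Weight Encoder maps a task id to a weight vector (here via
#    an embedding table), and
#  - task weights are applied to shared features by elementwise
#    multiplication before a shared output head.
# Neither detail is confirmed by the abstract.
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    """Shared network that extracts features from the input image."""

    def __init__(self, in_dim: int = 784, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, feat_dim),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(1))  # (B, 1, 28, 28) -> (B, feat_dim)


class TaskWeightEncoder(nn.Module):
    """Maps a task identifier to a task-specific weight vector."""

    def __init__(self, num_tasks: int = 5, feat_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_tasks, feat_dim)

    def forward(self, task_id: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.embed(task_id))  # weights in (0, 1)


class CALM(nn.Module):
    """One common network for all tasks; task weights modulate the features."""

    def __init__(self, num_tasks: int = 5, num_classes: int = 2,
                 feat_dim: int = 128):
        super().__init__()
        self.extractor = FeatureExtractor(feat_dim=feat_dim)
        self.task_encoder = TaskWeightEncoder(num_tasks, feat_dim)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        feats = self.extractor(x)             # shared features
        weights = self.task_encoder(task_id)  # task-specific weights
        return self.head(feats * weights)     # task-conditioned output


# Usage in a Split-MNIST task-incremental setting (5 binary tasks):
model = CALM(num_tasks=5, num_classes=2)
x = torch.randn(32, 1, 28, 28)            # a batch of MNIST-sized images
task = torch.zeros(32, dtype=torch.long)  # all samples from task 0
logits = model(x, task)                   # shape: (32, 2)
```

Conditioning shared features on a task representation in this way keeps a single set of parameters for all tasks, which is what allows the number of tasks to grow without storing per-task networks; the abstract does not specify the exact conditioning mechanism, so the multiplicative gating above is only one plausible choice.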

Journal

  • Proceedings of the Annual Conference of JSAI

Details

  • CRID
    1390566775142828800
  • NII Article ID
    130007856955
  • DOI
    10.11517/pjsai.jsai2020.0_2j5gs202
  • ISSN
    2758-7347
  • Text Lang
    ja
  • Data Source
    • JaLC
    • CiNii Articles
  • Abstract License Flag
    Disallowed
