Training of Deep Generative Models Using Several Loss Functions and its Application to Constrained Black-Box Optimization
-
- SAKAMOTO Naoki
- University of Tsukuba / RIKEN AIP
-
- SATO Rei
- University of Tsukuba / RIKEN AIP
-
- FUKUCHI Kazuto
- University of Tsukuba / RIKEN AIP
-
- SAKUMA Jun
- University of Tsukuba / RIKEN AIP
-
- AKIMOTO Youhei
- University of Tsukuba / RIKEN AIP
Bibliographic Information
- Other Title
-
- 複数の損失関数を用いた深層生成モデルの訓練と制約付きブラックボックス最適化への適用
Description
<p>In constrained black-box optimization, optimizing the objective function is extremely difficult when the feasible domain X consists of multiple discrete feasible regions; even obtaining a single feasible solution can be hard. This paper proposes a technique that transforms the search space S into a simple, almost unconstrained one. Specifically, we construct a decoder G: Z -> X that maps a latent input space Z to X, and we use Z as the new search space to achieve this transformation. To allow the mapping to cover the disconnected regions, the decoder G is built by concatenating small neural network models (NNs) with shortcut connections, and a separate loss function is defined for each NN. This prevents mode collapse, a well-known problem in deep generative models. In the experiments, we demonstrate the usefulness of the proposed technique on a test problem where the volume ratio of X to S is less than 1e-7.</p>
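The abstract's key architectural idea, a decoder built from concatenated small networks joined by shortcut connections, whose intermediate outputs can each receive their own loss, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the block structure, sizes, and names (`make_block`, `decoder`) are hypothetical, and the actual method trains these blocks as a deep generative model with per-block loss functions, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_block(dim, hidden=8):
    # One small NN block: a single hidden layer (sizes are illustrative).
    W1 = rng.standard_normal((dim, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, dim)) * 0.1
    return (W1, W2)

def block_forward(params, h):
    W1, W2 = params
    return np.tanh(h @ W1) @ W2

def decoder(z, blocks):
    # Decoder G: Z -> X as a chain of small blocks. The shortcut
    # (residual) connection adds each block's output to its input,
    # and the intermediate outputs h_1, ..., h_K are collected so
    # that a separate loss could be attached to each block.
    h = z
    intermediates = []
    for p in blocks:
        h = h + block_forward(p, h)
        intermediates.append(h)
    return h, intermediates

dim = 2
blocks = [make_block(dim) for _ in range(3)]
x, intermediates = decoder(np.zeros(dim), blocks)
print(x.shape, len(intermediates))
```

During optimization, a black-box search would then operate on z in the simple latent space Z, evaluating the objective at G(z) in X.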
Journal
-
- Proceedings of the Annual Conference of JSAI, JSAI2021 (0), 1G3GS2b05-1G3GS2b05, 2021
- The Japanese Society for Artificial Intelligence
Details
-
- CRID
- 1390006895525059584
-
- NII Article ID
- 130008051523
-
- Text Lang
- ja
-
- Data Source
-
- JaLC
- CiNii Articles
-
- Abstract License Flag
- Disallowed