Training of Multiple and Mixed Tasks with a Single Network Using Feature Modulation
Abstract
In recent years, multi-task learning (MTL) for image translation tasks has been actively explored. For MTL image translation, a network consisting of a shared encoder and multiple task-specific decoders is commonly used. In this case, roughly half of the network is task-specific, which leads to a significant increase in the number of parameters as the number of tasks grows. Task-specific parts should therefore be as small as possible. In this paper, we propose a method for MTL image translation using a single network with negligibly small task-specific parts, in which we share not only the encoder but also the decoder. In the proposed method, activation signals are adjusted for each task using Feature-wise Linear Modulation (FiLM), which applies an affine transformation conditioned on task signals. In addition, we let a single network learn mixtures of heterogeneous tasks, such as a combination of semantic segmentation and style transfer. Through several experiments, we demonstrate that a single network trained with our method can learn heterogeneous image translation tasks and their mixtures. Moreover, despite its small model size, our network outperforms some of the latest baselines on most of the individual tasks.
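The core mechanism above, FiLM, can be sketched in a few lines. The following is a minimal illustrative example, assuming a per-task lookup table that yields per-channel scale (gamma) and shift (beta) parameters; the names, shapes, and the lookup-table form are assumptions for clarity, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tasks, channels = 3, 8

# Hypothetical learned conditioning parameters: in practice these would be
# produced by a small trained network from a task-conditional signal; a
# lookup table per task is used here for simplicity.
gammas = rng.normal(1.0, 0.1, size=(num_tasks, channels))
betas = rng.normal(0.0, 0.1, size=(num_tasks, channels))

def film(features: np.ndarray, task_id: int) -> np.ndarray:
    """Apply FiLM: a per-channel affine transform conditioned on the task.

    features: activation map of shape (batch, channels, height, width).
    """
    g = gammas[task_id].reshape(1, -1, 1, 1)  # broadcast over batch and space
    b = betas[task_id].reshape(1, -1, 1, 1)
    return g * features + b

x = rng.normal(size=(2, channels, 4, 4))
y = film(x, task_id=1)  # same features, modulated for task 1
```

Because only the small (gamma, beta) generators are task-specific, the encoder and decoder weights are fully shared, which is what keeps the per-task parameter overhead negligible.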