Deep unsupervised pixelization

  • Chu Han
    The Chinese University of Hong Kong, and South China University of Technology
  • Qiang Wen
    South China University of Technology
  • Shengfeng He
    South China University of Technology
  • Qianshu Zhu
    South China University of Technology
  • Yinjie Tan
    South China University of Technology
  • Guoqiang Han
    South China University of Technology
  • Tien-Tsin Wong
    The Chinese University of Hong Kong and Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, SIAT

Abstract

In this paper, we present a novel unsupervised learning method for pixelization. Because creating pixel art is difficult, preparing paired training data for supervised learning is impractical. Instead, we propose an unsupervised learning framework to circumvent this difficulty. We leverage the dual nature of pixelization and depixelization, and model the two tasks in the same network in a bi-directional manner, with the input itself serving as training supervision. The two tasks are modeled as a cascaded network consisting of three stages with different purposes. GridNet transfers the input image into multi-scale grid-structured images with different aliasing effects. PixelNet, working with GridNet, synthesizes pixel art with sharp edges and perceptually optimal local structures. DepixelNet follows the previous networks and aims to recover the original image from the pixelized result. To enable unsupervised learning, a mirror loss is proposed to preserve the reversibility of feature representations throughout the process. In addition, adversarial, L1, and gradient losses are incorporated into the network to obtain pixel art that retains color correctness and smoothness. We show that our technique synthesizes crisper and perceptually more appropriate pixel art than state-of-the-art image downscaling methods. We evaluate the proposed method with extensive experiments on many images; it outperforms state-of-the-art methods in terms of visual quality and user preference.
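The abstract describes a cascaded pipeline (GridNet → PixelNet → DepixelNet) trained with the input image as its own supervision. Below is a minimal PyTorch-style sketch of that round trip. The layer counts, channel widths, loss weights, and the simplified loss (L1 reconstruction plus a gradient term) are illustrative assumptions for exposition only, not the authors' implementation; the adversarial and mirror-feature losses from the paper are omitted.

```python
# Sketch of the cascaded pixelization/depixelization round trip.
# All architectural details here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, hidden_ch):
    # Small convolutional stack standing in for each stage's backbone.
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(hidden_ch, 3, 3, padding=1), nn.Sigmoid(),
    )

class GridNet(nn.Module):
    """Transfers the input into a grid-structured intermediate image."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3, 64)
    def forward(self, x):
        return self.body(x)

class PixelNet(nn.Module):
    """Synthesizes the pixel-art image from the grid-structured image."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3, 64)
    def forward(self, g):
        return self.body(g)

class DepixelNet(nn.Module):
    """Recovers an estimate of the original image from the pixel art."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(3, 64)
    def forward(self, p):
        return self.body(p)

def gradient_loss(pred, target):
    # L1 distance between horizontal and vertical image gradients,
    # encouraging sharp yet consistent edges.
    dx = F.l1_loss(pred[..., :, 1:] - pred[..., :, :-1],
                   target[..., :, 1:] - target[..., :, :-1])
    dy = F.l1_loss(pred[..., 1:, :] - pred[..., :-1, :],
                   target[..., 1:, :] - target[..., :-1, :])
    return dx + dy

# Unsupervised round trip: the input image itself supervises reconstruction.
grid_net, pixel_net, depixel_net = GridNet(), PixelNet(), DepixelNet()
x = torch.rand(1, 3, 256, 256)            # input image in [0, 1]
pixel_art = pixel_net(grid_net(x))        # pixelization path
recon = depixel_net(pixel_art)            # depixelization path

# Input-as-supervision reconstruction plus a gradient term (weight assumed).
loss = F.l1_loss(recon, x) + 0.5 * gradient_loss(pixel_art, x)
loss.backward()
```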
