Deep learning predictions of galaxy merger stage and the importance of observational realism

  • Connor Bottrell
    Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 1A1, Canada
  • Maan H Hani
    Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 1A1, Canada
  • Hossen Teimoorinia
    Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 1A1, Canada
  • Sara L Ellison
    Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 1A1, Canada
  • Jorge Moreno
    Department of Physics and Astronomy, Pomona College, Claremont, CA 91711, USA
  • Paul Torrey
    Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611, USA
  • Christopher C Hayward
    Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
  • Mallory Thorp
    Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 1A1, Canada
  • Luc Simard
    National Research Council of Canada, 5071 West Saanich Road, Victoria, British Columbia V9E 2E7, Canada
  • Lars Hernquist
    Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA

Description

ABSTRACT

Machine learning is becoming a popular tool to quantify galaxy morphologies and identify mergers. However, this technique relies on using an appropriate set of training data to be successful. By combining hydrodynamical simulations, synthetic observations, and convolutional neural networks (CNNs), we quantitatively assess how realistic simulated galaxy images must be in order to reliably classify mergers. Specifically, we compare the performance of CNNs trained with two types of galaxy images, stellar maps and dust-inclusive radiatively transferred images, each with three levels of observational realism: (1) no observational effects (idealized images), (2) realistic sky and point spread function (semirealistic images), and (3) insertion into a real sky image (fully realistic images). We find that networks trained on either idealized or semireal images have poor performance when applied to survey-realistic images. In contrast, networks trained on fully realistic images achieve 87.1 per cent classification performance. Importantly, the level of realism in the training images is much more important than whether the images included radiative transfer, or simply used the stellar maps (87.1 per cent compared to 79.6 per cent accuracy, respectively). Therefore, one can avoid the large computational and storage cost of running radiative transfer with a relatively modest compromise in classification performance. Making photometry-based networks insensitive to colour incurs a very mild penalty to performance with survey-realistic data (86.0 per cent with r-only compared to 87.1 per cent with gri). This result demonstrates that while colour can be exploited by colour-sensitive networks, it is not necessary to achieve high accuracy and so can be avoided if desired. We provide the public release of our statistical observational realism suite, RealSim, as a companion to this paper.
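The abstract's "semirealistic" step (realistic point spread function and sky) can be illustrated with a minimal sketch: convolve an idealized image with a Gaussian PSF and add Gaussian sky noise. This is a toy stand-in only; the released RealSim suite goes further by statistically inserting galaxies into real survey sky cutouts, and the function name, parameter names, and values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_semirealism(idealized, psf_fwhm_pix=3.0, sky_sigma=0.02, seed=0):
    """Degrade an idealized galaxy image with a Gaussian PSF and flat sky noise.

    Toy sketch of the paper's 'semirealistic' level of realism.
    All parameters are hypothetical defaults for illustration.
    """
    rng = np.random.default_rng(seed)
    # Convert PSF FWHM (pixels) to the Gaussian sigma expected by the filter
    sigma = psf_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    blurred = gaussian_filter(idealized, sigma=sigma)
    # Add zero-mean Gaussian sky noise on top of the blurred image
    return blurred + rng.normal(0.0, sky_sigma, size=idealized.shape)

# Example: a point source becomes a PSF-smeared blob sitting on a noisy sky
img = np.zeros((64, 64))
img[32, 32] = 1.0
semireal = add_semirealism(img)
```

The "fully realistic" level would replace the flat-noise term with a cutout from a real survey image, so that the training data inherit genuine sky gradients, neighbouring sources, and correlated noise.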
