Unsupervised neural network models of the ventral visual stream

  • Chengxu Zhuang
    Department of Psychology, Stanford University, Stanford, CA 94305;
  • Siming Yan
    Department of Computer Science, The University of Texas at Austin, Austin, TX 78712;
  • Aran Nayebi
    Neurosciences PhD Program, Stanford University, Stanford, CA 94305;
  • Martin Schrimpf
    Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
  • Michael C. Frank
    Department of Psychology, Stanford University, Stanford, CA 94305;
  • James J. DiCarlo
    Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
  • Daniel L. K. Yamins
    Department of Psychology, Stanford University, Stanford, CA 94305;

Description

Significance

Primates show a remarkable ability to recognize objects. This ability is achieved by their ventral visual stream, a set of hierarchically interconnected brain areas. The best quantitative models of these areas are deep neural networks trained with human annotations, but they receive far more annotations than infants do during development, making them implausible models of ventral stream development. Here, we report that recent progress in unsupervised learning has largely closed this gap. We find that networks trained with recent unsupervised methods achieve prediction accuracy in the ventral stream that equals or exceeds that of today's best supervised models. These results illustrate a use of unsupervised learning to model a brain system and present a strong candidate for a biologically plausible computational theory of sensory learning.
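The "prediction accuracy in the ventral stream" mentioned above is commonly measured by fitting a regularized linear map from a network layer's activations to recorded neural responses and scoring held-out predictions. The following is a minimal sketch of that evaluation idea, using synthetic data in place of real recordings and model features; the variable names and the choice of ridge regression are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 200, 50, 10

# Synthetic stand-in for model-layer activations, one row per image.
X = rng.normal(size=(n_images, n_features))

# Synthetic stand-in for recorded neural responses: a linear readout of the
# features plus noise (real responses would come from electrophysiology).
W_true = rng.normal(size=(n_features, n_neurons))
Y = X @ W_true + 0.1 * rng.normal(size=(n_images, n_neurons))

# Split images into train and held-out test halves.
train, test = np.arange(100), np.arange(100, 200)

# Ridge regression: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
XtX = X[train].T @ X[train] + lam * np.eye(n_features)
W = np.linalg.solve(XtX, X[train].T @ Y[train])

# Score each neuron by the Pearson correlation between predicted and
# actual held-out responses; the mean is a simple predictivity summary.
pred = X[test] @ W
r = np.array([np.corrcoef(pred[:, i], Y[test][:, i])[0, 1]
              for i in range(n_neurons)])
print(f"mean held-out correlation: {r.mean():.2f}")
```

In practice such scores are additionally noise-corrected and cross-validated, but the core step is this regularized linear mapping from features to responses.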
