Studies on Model Protection for Machine Learning and Its Research Trends

Other Title
  • 機械学習のモデル保護の研究とその周辺動向

Abstract

Recent studies on privacy-preserving machine learning have focused on protecting the data used for inference. However, protecting the model itself is another important aspect, since an owner often incurs a substantial cost to build and release a model. In this paper, we discuss model protection from two standpoints: a model utilized in an adversary's environment, and a model used independently of that environment. First, from the former standpoint, we present model obliviousness, whereby a model is hosted without revealing the model itself, i.e., in an encrypted fashion. Loosely speaking, by utilizing secure computation, which enables an output to be computed without revealing the inputs, the inference process is executed on an encrypted model. Although secure computation often incurs a heavy computational cost, we propose a method that improves inference throughput by modifying the structure of the neural networks. Next, from the latter standpoint, we present model extraction attacks, in which an adversary obtains a model, called a substitute model, from inference results alone, without access to the training data. In particular, we show that the adversary can obtain a substitute model efficiently by biasing the outputs of a target model. Potential countermeasures against model extraction attacks are discussed as well.
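
As a rough illustration of inference over an encrypted model, the following Python sketch computes a linear model's output w . x under two-party additive secret sharing, so that neither party ever holds the weight vector w in the clear. This is a minimal sketch under assumptions of our own (a single linear layer, a public client input, and a toy modulus); it does not reproduce the paper's actual secure computation protocol or its throughput improvement.

    import numpy as np

    P = 2**31 - 1  # toy prime modulus for the arithmetic field

    def share(secret, rng):
        # Split an integer vector into two additive shares mod P;
        # each share alone is uniformly random and reveals nothing.
        r = rng.integers(0, P, size=secret.shape)
        return r, (secret - r) % P

    def linear_inference(shares, x):
        # Each party multiplies its own share of w by the public input x.
        # Summing the partial results mod P reconstructs w . x, while
        # neither party ever observes the weight vector w itself.
        return sum((s @ x) % P for s in shares) % P

    rng = np.random.default_rng(0)
    w = np.array([3, 1, 4])   # model owner's secret weights
    x = np.array([2, 7, 1])   # client's query input (public in this toy setting)
    w0, w1 = share(w, rng)
    assert linear_inference([w0, w1], x) == (w @ x) % P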
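
Similarly, the core of a model extraction attack can be sketched as a query loop: the adversary synthesizes inputs, records the black-box target's predicted labels, and fits a substitute model to the resulting input-label pairs. The function names, query budget, and substitute architecture below are illustrative assumptions, and the paper's specific technique of biasing the target's outputs to make extraction more efficient is not implemented here.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def extract_substitute(query_fn, input_dim, n_queries=1000, seed=0):
        # The adversary never sees the target's parameters or training
        # data; it only observes labels returned by the query oracle.
        rng = np.random.default_rng(seed)
        X = rng.uniform(-1.0, 1.0, size=(n_queries, input_dim))
        y = np.array([query_fn(x) for x in X])
        substitute = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
        substitute.fit(X, y)
        return substitute

    # Toy black-box target standing in for a deployed model.
    target = lambda x: int(x.sum() > 0)
    substitute = extract_substitute(target, input_dim=4)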
