Studies on Model Protection for Machine Learning and Its Research Trends
-
- YANAI Naoto (Osaka University)
- IWAHANA Kazuki (Osaka University)
- KITAI Hiromasa (Osaka University)
- SHIKATA Toshiki (Osaka University)
- TAKEMURA Tatsuya (Osaka University)
- CRUZ Jason Paul (Osaka University)
Bibliographic Information
- Other Title
-
- 機械学習のモデル保護の研究とその周辺動向 (Studies on Model Protection for Machine Learning and Its Research Trends)
Abstract
Recent studies on privacy-preserving machine learning have focused on protecting the data used for inference. However, protecting the model itself is another important aspect, since building and releasing a model is often costly for its owner. In this paper, we discuss model protection from two standpoints: a model deployed in an adversary's environment, and a model whose protection is independent of the deployment environment. For the former, we present model obliviousness, whereby a model is hosted without revealing the model itself, i.e., in encrypted form. Loosely speaking, secure computation, which computes an output without revealing its inputs, allows the inference process to be executed on an encrypted model. Because secure computation typically incurs a heavy computational cost, we propose a method that improves inference throughput by modifying the structure of the neural network. For the latter, we present model extraction attacks, in which an adversary obtains a model, called a substitute model, from inference results alone, without access to the training data. In particular, we show that an adversary can obtain a substitute model efficiently by biasing the outputs of the target model. We also discuss potential countermeasures against model extraction attacks.
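To illustrate the secure-computation idea the abstract describes (computing an output without revealing the inputs), here is a minimal additive-secret-sharing sketch in Python. It is not the paper's actual protocol: the modulus, function names, and two-party setup are all hypothetical. The model weights are split into two random shares; each party computes a partial dot product on its share only, and summing the two partial results reconstructs the plaintext inference while neither share alone reveals the weights.

```python
import random

P = 2**31 - 1  # modulus for arithmetic secret sharing (hypothetical choice)

def share(value):
    """Split an integer into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (value - r) % P

def shared_dot(weights, x):
    """Dot product in which the model weights stay split between two parties."""
    shares = [share(w) for w in weights]
    # Party 1 and Party 2 each compute a partial result on one share only.
    part1 = sum(s1 * xi for (s1, _), xi in zip(shares, x)) % P
    part2 = sum(s2 * xi for (_, s2), xi in zip(shares, x)) % P
    # Reconstruction: the sum of the partial results equals the plaintext output.
    return (part1 + part2) % P

weights = [3, 1, 4]  # secret model parameters
x = [2, 7, 1]        # client input
assert shared_dot(weights, x) == (3*2 + 1*7 + 4*1) % P  # 17
```

The sketch covers a single linear layer; the heavy cost the abstract mentions arises because a full neural network repeats such shared operations (plus non-linear activations) at every layer, which is why restructuring the network can improve throughput.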
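The model extraction setting described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's attack: the target and substitute are hypothetical one-dimensional threshold classifiers. The adversary only queries the black-box target, records the returned labels, and fits a substitute model to those query/label pairs, never touching the training data.

```python
def target_model(x):
    """Black-box classifier the adversary can only query (hypothetical)."""
    return 1 if x >= 3.7 else 0  # secret decision threshold

# Step 1: the adversary queries the target on points of its choosing.
queries = [i / 10 for i in range(100)]        # 0.0, 0.1, ..., 9.9
labels = [target_model(q) for q in queries]   # observed outputs only

# Step 2: fit a substitute model from the query/label pairs alone.
boundary = min(q for q, y in zip(queries, labels) if y == 1)

def substitute_model(x):
    return 1 if x >= boundary else 0

# The substitute agrees with the target on every queried point.
assert all(substitute_model(q) == target_model(q) for q in queries)
```

In this toy, outputs that concentrate queries near the decision boundary let the adversary pin down the threshold with few queries, which mirrors the abstract's point that biased target outputs can make extraction more efficient.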
Journal
-
- 電子情報通信学会論文誌B 通信 (IEICE Transactions on Communications, Japanese Edition)
-
J104-B (10), 742-760, 2021-10-01
電子情報通信学会 (The Institute of Electronics, Information and Communication Engineers)
Details
-
- CRID
- 1390571039717389696
-
- ISSN
- 18810209
-
- Text Lang
- ja
-
- Data Source
-
- JaLC
-
- Abstract License Flag
- Disallowed