- Hiromu Yakura (University of Tsukuba)
- Jun Sakuma (University of Tsukuba)
Description
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are fed directly to the recognition model, and therefore cannot perform such a physical attack because of reverberation and noise introduced by playback environments. In contrast, our method obtains robust adversarial examples by simulating the transformations caused by playback and recording in the physical world and incorporating these transformations into the generation process. An evaluation and a listening experiment demonstrated that our adversarial examples can attack the model without being noticed by humans. This result suggests that audio adversarial examples generated by the proposed method may become a real threat.
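The record contains no code, but the strategy the abstract describes (optimizing a perturbation while simulating playback and recording so the attack survives over-the-air transmission) can be illustrated with a rough sketch. The sketch below is not the authors' implementation; `simulate_playback`, `target_loss_fn`, the impulse-response set, and all hyperparameters are placeholder assumptions standing in for whatever recognition model and transformation models the paper actually uses.

```python
# Minimal sketch (assumed, not the paper's code) of generating an audio
# adversarial example that is averaged over simulated playback/recording
# transformations, in the spirit of the method described above.
import torch
import torch.nn.functional as F

def simulate_playback(audio, impulse_responses, noise_std=0.01):
    """Apply a randomly chosen room impulse response plus additive noise."""
    ir = impulse_responses[torch.randint(len(impulse_responses), (1,)).item()]
    # 1-D convolution with the impulse response models reverberation.
    reverbed = F.conv1d(audio.view(1, 1, -1), ir.view(1, 1, -1),
                        padding=ir.numel() // 2).view(-1)[: audio.numel()]
    return reverbed + noise_std * torch.randn_like(reverbed)

def generate_robust_example(audio, target_loss_fn, impulse_responses,
                            steps=1000, lr=1e-3, eps=0.05):
    """Optimize a perturbation whose attack loss is averaged over
    several simulated playbacks, so it stays effective in the air."""
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Expectation over transformations: average the recognition-attack
        # loss (placeholder target_loss_fn) across random playback simulations.
        loss = torch.stack([
            target_loss_fn(simulate_playback(audio + delta, impulse_responses))
            for _ in range(4)
        ]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation inaudible-ish
    return (audio + delta).detach()
```

In this sketch, `target_loss_fn` would score how strongly the (simulated) recorded audio is transcribed as the attacker's target phrase; the clamp on the perturbation is one simple stand-in for whatever imperceptibility constraint the paper imposes.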
Published in
- Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 5334-5341, 2019-08
- International Joint Conferences on Artificial Intelligence Organization
Keywords
- FOS: Computer and information sciences
- Computer Science - Machine Learning
- Sound (cs.SD)
- Computer Science - Cryptography and Security
- Machine Learning (stat.ML)
- Computer Science - Sound
- Machine Learning (cs.LG)
- Statistics - Machine Learning
- Audio and Speech Processing (eess.AS)
- FOS: Electrical engineering, electronic engineering, information engineering
- Cryptography and Security (cs.CR)
- Electrical Engineering and Systems Science - Audio and Speech Processing
Details
- CRID: 1360287218834765568
- Data sources:
  - Crossref
  - KAKEN
  - OpenAIRE