- 【Updated on May 12, 2025】 Integration of CiNii Dissertations and CiNii Books into CiNii Research
- Trial version of CiNii Research Knowledge Graph Search feature is available on CiNii Labs
- 【Updated on June 30, 2025】Suspension and deletion of data provided by Nikkei BP
- Regarding the recording of “Research Data” and “Evidence Data”
All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks
Description
Large Language Models (LLMs), such as ChatGPT, encounter ‘jailbreak’ challenges, wherein safeguards are circumvented to generate ethically harmful prompts. This study introduces a straightforward black-box method for efficiently crafting jailbreak prompts that bypass LLM defenses. Our technique iteratively transforms harmful prompts into benign expressions directly utilizing the target LLM, predicated on the hypothesis that LLMs can autonomously generate expressions that evade safeguards. Through experiments conducted with ChatGPT (GPT-3.5 and GPT-4) and Gemini-Pro, our method consistently achieved an attack success rate exceeding 80% within an average of five iterations for forbidden questions and proved to be robust against model updates. The jailbreak prompts generated were not only naturally worded and succinct, but also challenging to defend against. These findings suggest that the creation of effective jailbreak prompts is less complex than previously believed, underscoring the heightened risk posed by black-box jailbreak attacks.
Journal
-
- Applied Sciences
-
Applied Sciences 14 (9), 3558-, 2024-04-23
MDPI
- Tweet
Details 詳細情報について
-
- CRID
- 1050863097246484352
-
- ISSN
- 20763417
-
- HANDLE
- 10228/0002000612
-
- Text Lang
- en
-
- Article Type
- journal article
-
- Data Source
-
- IRDB