Post-editing of Hallucinations by Prompt-tuning
- HATAKEYAMA Haruki, Tokyo Denki University
- MORIWAKI Keita, Tokyo Denki University
- SHUZO Masaki, Tokyo Denki University
- MAEDA Eisaku, Tokyo Denki University
Bibliographic Information
- Other Title
- Prompt-tuningによるHallucinationの事後修正
Abstract
<p>Large language models (LLMs) are increasingly used for various language processing tasks. However, LLMs suffer from the hallucination problem of generating information that is inconsistent with the facts. To address this problem, methods have been proposed that construct hallucination detectors and correctors using machine learning. These methods have not yet solved the problem adequately, because the detectors produce false detections and the correctors make excessive corrections. Other methods use LLMs themselves to detect and correct hallucinations, but they rely on pipeline processing with multiple prompts and therefore do not fundamentally resolve false positives and over-corrections. In this study, we propose a post-editing method that uses an LLM with a single prompt. Focusing on hallucinations of numerals and proper nouns, we compared the proposed method with existing methods and confirmed its effectiveness.</p>
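The single-prompt post-editing idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt wording and the `build_postedit_prompt` helper are hypothetical, and the actual LLM call is omitted.

```python
# Sketch of single-prompt hallucination post-editing (hypothetical;
# not the paper's actual prompt or code). The idea: one prompt asks
# the LLM to detect AND correct factual inconsistencies (numerals,
# proper nouns) against a reference text in a single call, instead of
# a multi-prompt detect-then-correct pipeline.

def build_postedit_prompt(reference: str, generated: str) -> str:
    """Compose a single prompt asking an LLM to post-edit `generated`
    so it agrees with `reference`, changing only numerals and proper
    nouns that contradict the reference."""
    return (
        "Reference text:\n"
        f"{reference}\n\n"
        "Generated text:\n"
        f"{generated}\n\n"
        "Instruction: Correct only the numerals and proper nouns in the "
        "generated text that contradict the reference. Leave everything "
        "else unchanged, and output the corrected text only."
    )

# Example with a numeral hallucination (1989 vs. 1998).
prompt = build_postedit_prompt(
    reference="The bridge opened in 1998 and is 1,991 m long.",
    generated="The bridge opened in 1989 and is 1,991 m long.",
)
# `prompt` would then be sent to an LLM in a single call
# (e.g. via a chat-completion API; the call itself is omitted here).
print(prompt)
```

Keeping detection and correction in one prompt means the model only edits what it simultaneously judges to be inconsistent, which is the abstract's motivation for avoiding separate detector and corrector stages.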
Journal
- JSAI Technical Report, SIG-SLUD 100 (0), 101-106, 2024-02-20
- The Japanese Society for Artificial Intelligence
Details
- CRID: 1390580682415454336
- ISSN: 2436-4576 (online), 0918-5682 (print)
- Text Lang: ja
- Data Source: JaLC
- Abstract License Flag: Allowed