Post-editing of Hallucinations by Prompt-tuning


Other Title
  • Prompt-tuningによるHallucinationの事後修正

Abstract

<p>Large language models (LLMs) are increasingly used for a variety of language processing tasks. However, it has been pointed out that LLMs suffer from the hallucination problem of generating information that is inconsistent with the facts. To address this problem, methods have been proposed that construct hallucination detectors and correctors using machine learning. These methods have not yet solved the problem adequately, however, because the detectors produce false detections and the correctors over-correct. There are also methods that use LLMs themselves to detect and correct hallucinations, but because they rely on pipeline processing with multiple prompts, they do not fundamentally resolve false positives and over-corrections. In this study, we propose a post-editing method that uses an LLM with a single prompt. Focusing on hallucinations of numerals and proper nouns, we compared the proposed method with existing methods and confirmed its effectiveness.</p>
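The abstract describes collapsing detection and correction into one LLM prompt, focused on numerals and proper nouns. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: all function and variable names are assumptions, and the actual LLM call is left out in favor of the prompt construction and a surface-level check of which numerals a post-editor would need to verify.

```python
import re

def build_postedit_prompt(source: str, draft: str) -> str:
    """Compose a single prompt that asks an LLM to detect AND correct
    hallucinated numerals/proper nouns in one pass (hypothetical sketch)."""
    return (
        "Source document:\n" + source + "\n\n"
        "Draft text:\n" + draft + "\n\n"
        "Instruction: Compare every numeral and proper noun in the draft "
        "against the source. If a span contradicts the source, rewrite "
        "only that span; otherwise leave the draft unchanged. "
        "Return the corrected draft only."
    )

def numerals(text: str) -> list[str]:
    """Surface-level extraction of the numerals a post-editor must verify."""
    return re.findall(r"\d+(?:[.,]\d+)*", text)

source = "The company reported revenue of 42 million dollars in 2023."
draft = "The company reported revenue of 45 million dollars in 2023."
prompt = build_postedit_prompt(source, draft)
print(numerals(draft))  # numerals in the draft to check against the source
```

The single-prompt design sidesteps the pipeline issue the abstract raises: since detection and correction share one context, the model is instructed to leave already-faithful spans untouched rather than handing a (possibly false-positive) detection to a separate corrector.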


Details

  • CRID
    1390580682415454336
  • DOI
    10.11517/jsaislud.100.0_101
  • ISSN
    24364576
    09185682
  • Text Lang
    ja
  • Data Source
    • JaLC
  • Abstract License Flag
    Allowed
