Metacognition in Large Language Models

  • YOSHIZAWA Shun
    Tokai University
  • ONZO Ayako
    Graduate School of Arts and Sciences, The University of Tokyo
  • NOZAWA Shinichi
    Graduate School of Arts and Sciences, The University of Tokyo
  • TAKANO Tsugumi
    Graduate School of Arts and Sciences, The University of Tokyo
  • MOGI Ken
    Graduate School of Arts and Sciences, The University of Tokyo / Sony Computer Science Laboratories

Bibliographic Information

Other Title
  • 大規模言語モデルのメタ認知 (Metacognition of Large Language Models)
  • Does GPT-4 have the metacognition?
  • GPT-4はメタ認知を持つか? (Does GPT-4 have metacognition?)

Description

<p>As one of the Large Language Models (LLMs), the Chat Generative Pre-Trained Transformer (ChatGPT) has been analysed and evaluated with respect to its cognitive capabilities. For example, ChatGPT showed robust performance in false-belief tasks, suggesting that it has a theory of mind. It is therefore important to assess whether GPT-4 exhibits metacognitive abilities, that is, how accurately it can be aware of its own performance. Here we investigate the general metacognitive abilities of LLMs by analysing ChatGPT's and humans' confidence judgements. Human subjects tended to be less confident when they answered incorrectly than when they answered correctly. However, GPT-4 showed high confidence even on questions that it could not answer correctly. These results suggest that GPT-4 lacks specific metacognitive abilities.</p>
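The analysis described above compares confidence judgements on correct versus incorrect answers. A minimal sketch of that comparison is given below; the `confidence_gap` helper and all numbers are hypothetical illustrations of the qualitative pattern the abstract reports, not the study's actual data or code.

```python
def confidence_gap(trials):
    """Compute mean confidence separately for correct and incorrect answers.

    trials: list of (correct: bool, confidence: float in 0..100) pairs.
    Returns (mean_confidence_correct, mean_confidence_incorrect).
    A large positive gap between the two indicates metacognitive
    sensitivity: confidence tracks actual performance.
    """
    correct = [conf for ok, conf in trials if ok]
    incorrect = [conf for ok, conf in trials if not ok]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(correct), mean(incorrect)


# Toy, made-up numbers mirroring the reported qualitative pattern:
# humans lower their confidence when wrong; GPT-4 stays highly confident.
human_trials = [(True, 85), (True, 90), (False, 40), (False, 55)]
gpt4_trials = [(True, 95), (True, 90), (False, 90), (False, 95)]

human_correct, human_incorrect = confidence_gap(human_trials)
gpt4_correct, gpt4_incorrect = confidence_gap(gpt4_trials)
```

Under these toy inputs, the human confidence gap (correct minus incorrect) is large, while GPT-4's is near zero, which is the signature of absent metacognitive sensitivity that the abstract describes.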
