Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely

  • Carina Kauf
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Anna A. Ivanova
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Giulia Rambelli
    Department of Modern Languages, Literatures and Cultures, University of Bologna
  • Emmanuele Chersoni
    Department of Chinese and Bilingual Studies, Hong Kong Polytechnic University
  • Jingyuan Selena She
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Zawad Chowdhury
    Department of Mathematics, University of Washington
  • Evelina Fedorenko
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Alessandro Lenci
    Department of Philology, Literature, and Linguistics, University of Pisa

Description

Abstract

Word co-occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs' semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent-patient interactions than to minimally different implausible versions of the same event. Using three curated sets of minimal sentence pairs (total n = 1215), we found that pretrained LLMs possess substantial event knowledge, outperforming other distributional language models. In particular, they almost always assign a higher likelihood to possible versus impossible events ("The teacher bought the laptop" vs. "The laptop bought the teacher"). However, LLMs show less consistent preferences for likely versus unlikely events ("The nanny tutored the boy" vs. "The boy tutored the nanny"). In follow-up analyses, we show that (i) LLM scores are driven by both plausibility and surface-level sentence features, (ii) LLM scores generalize well across syntactic variants (active vs. passive constructions) but less well across semantic variants (synonymous sentences), (iii) some LLM errors mirror human judgment ambiguity, and (iv) sentence plausibility serves as an organizing dimension in internal LLM representations. Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.
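The minimal-pair comparison at the heart of the study can be illustrated with a toy distributional model. The sketch below is purely illustrative and is not the paper's actual scoring pipeline: it trains a smoothed bigram model on a tiny hypothetical corpus and checks whether the model assigns a higher log-likelihood to the plausible member of a minimal pair than to its implausible counterpart, which is the same comparison the authors run with pretrained LLM scores.

```python
import math
from collections import Counter

def train_bigram(corpus, alpha=1.0):
    """Train an add-alpha smoothed bigram model; return a sentence scorer."""
    tokens = []
    for sent in corpus:
        tokens.extend(["<s>"] + sent.lower().split() + ["</s>"])
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def logprob(sentence):
        # Sum smoothed bigram log-probabilities over the sentence.
        words = ["<s>"] + sentence.lower().split() + ["</s>"]
        return sum(
            math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab_size))
            for a, b in zip(words, words[1:])
        )

    return logprob

# Hypothetical training corpus containing the plausible event descriptions.
corpus = [
    "the teacher bought the laptop",
    "the nanny tutored the boy",
]
score = train_bigram(corpus)

plausible = score("the teacher bought the laptop")
implausible = score("the laptop bought the teacher")
print(plausible > implausible)  # the plausible variant scores higher
```

Pretrained LLMs play the role of `score` in the paper: a model "knows" the event if it ranks the plausible sentence above the minimally different implausible one.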

