Societal Bias in Vision-and-Language Datasets and Models

  • NAKASHIMA Yuta
    Institute for Datability Science, Osaka University
  • HIROTA Yusuke
    Graduate School of Information Science and Technology, Osaka University
  • WU Yankun
    Graduate School of Information Science and Technology, Osaka University
  • GARCIA Noa
    Institute for Datability Science, Osaka University


Abstract

<p>Vision-and-language is now a popular research area lying at the intersection of computer vision and natural language processing. Researchers have tackled the various tasks offered by dedicated datasets, such as image captioning and visual question answering, and have built a variety of models achieving state-of-the-art performance. At the same time, the community has become aware of bias in these models, which can be especially harmful when it involves demographic attributes. This paper introduces two of our recent works presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023. The first sheds light on societal bias in a large-scale, uncurated dataset of the kind that is indispensable for training recent models. The second presents a model-agnostic framework for mitigating gender bias in arbitrary image captioning models. This paper conveys the high-level ideas behind these works; interested readers may refer to the original papers.12,16)</p>

Published in

  • 日本画像学会誌 (Journal of the Imaging Society of Japan), 62 (6), 599-609, 2023-12-10

    一般社団法人 日本画像学会 (The Imaging Society of Japan)
