JGLUE: Japanese General Language Understanding Evaluation
- Kurihara Kentaro (Waseda University)
- Kawahara Daisuke (Waseda University)
- Shibata Tomohide (Yahoo Japan Corporation)
Bibliographic Information
- Other Title
- JGLUE: 日本語言語理解ベンチマーク (JGLUE: A Japanese Language Understanding Benchmark)
- 「JGLUE: 日本語言語理解ベンチマーク」の経緯とその後 (The Background of "JGLUE: A Japanese Language Understanding Benchmark" and Subsequent Developments)
Description
To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. The English NLU benchmark, GLUE (Wang et al. 2018), has been the forerunner, and benchmarks for languages other than English have been constructed, such as CLUE (Xu et al. 2020) for Chinese and FLUE (Le et al. 2020) for French. However, there is no such benchmark for Japanese, and this is a serious problem in Japanese NLP. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. JGLUE consists of three kinds of tasks: text classification, sentence pair classification, and QA. We hope that JGLUE will facilitate NLU research in Japanese.
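As a concrete illustration of the three task types, below is a minimal sketch of loading one JGLUE task, assuming the Hugging Face datasets library and a community mirror of the data. The repository identifier "shunk031/JGLUE", the configuration name "JNLI" (JGLUE's Japanese natural language inference task, a sentence pair classification dataset), and the field names shown are assumptions about that mirror rather than information from this record; the data itself is released through the yahoojapan/JGLUE repository.

    # Minimal sketch, not an official loader: inspect one JGLUE task with the
    # Hugging Face datasets library. The dataset path "shunk031/JGLUE", the
    # config name "JNLI", and the field names below are assumptions about a
    # community mirror; adjust them to however you obtain the JGLUE data.
    from datasets import load_dataset

    # JNLI is JGLUE's sentence pair classification (NLI) task.
    jnli = load_dataset("shunk031/JGLUE", name="JNLI")

    print(jnli)                    # available splits and their sizes
    example = jnli["train"][0]
    print(example["sentence1"])    # premise sentence (Japanese)
    print(example["sentence2"])    # hypothesis sentence (Japanese)
    print(example["label"])        # entailment / contradiction / neutral label id

Analogous calls with the other configuration names would cover the text classification and QA tasks in the same way.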
Journal
- Journal of Natural Language Processing 29 (2), 711-717, 2022
- The Association for Natural Language Processing
Details
- CRID: 1390573881058340608
- ISSN: 2185-8314, 1340-7619
- Text Lang: ja
- Data Source: JaLC, Crossref, OpenAIRE
- Abstract License Flag: Disallowed