Investigation of Expert Knowledge Extraction Using Pre-trained Language Models
-
- ASANO Seiya
- The University of Tokyo
-
- ISONUMA Masaru
- The University of Tokyo
- The University of Edinburgh
-
- ASATANI Kimitaka
- The University of Tokyo
-
- NOMURA Misuzu
- Daikin Industries, Ltd.
-
- MORI Junichiro
- The University of Tokyo
- RIKEN
-
- SAKATA Ichiro
- The University of Tokyo
Bibliographic Information
- Other Title
-
- 事前学習済み言語モデルによる専門知識抽出の検討
Abstract
<p>In recent years, much research has explored using language models in place of knowledge bases. Compared with structured knowledge bases, language models offer several advantages: they do not require manual definition of attributes and relations, and they can be queried over more data in a more flexible and efficient manner. However, their performance is still immature, and many hurdles remain, such as the inability to predict compound nouns. This study focuses on specialized compound nouns in chemistry and investigates how accurately knowledge in a specific field can be extracted. Specifically, we use SciFive, a T5 model further pre-trained on biomedical papers, and perform additional training on abstract data from Scopus, aiming to improve the accuracy of extracting specialized chemical knowledge. The results show how accuracy changes with the amount of additional training data: accuracy decreases with little data and improves with relatively more data. These results demonstrate further potential for attempts to extract knowledge from language models.</p>
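The extraction setup described in the abstract is a cloze-style probe: a specialist compound noun is masked with a T5 sentinel token, and the model is asked to recover it in T5's text-to-text format. A minimal sketch of building such probes follows; the helper function and the example chemistry sentence are illustrative assumptions, not taken from the paper.

```python
def make_cloze(sentence: str, target: str, sentinel: str = "<extra_id_0>"):
    """Replace the target compound noun with a T5 sentinel token.

    Returns the masked input and the expected span-prediction label
    in T5's text-to-text span-corruption format.
    (Illustrative helper; not from the paper.)
    """
    if target not in sentence:
        raise ValueError(f"target {target!r} not found in sentence")
    masked = sentence.replace(target, sentinel, 1)
    label = f"{sentinel} {target} <extra_id_1>"
    return masked, label

# Made-up example sentence containing a specialist compound noun:
src, tgt = make_cloze(
    "Sodium dodecyl sulfate is an anionic surfactant used to denature proteins.",
    "anionic surfactant",
)
# src: "Sodium dodecyl sulfate is an <extra_id_0> used to denature proteins."
# tgt: "<extra_id_0> anionic surfactant <extra_id_1>"

# A SciFive checkpoint could then be probed or fine-tuned on such pairs
# with Hugging Face Transformers, e.g. (not run here, model id assumed):
#   from transformers import T5ForConditionalGeneration, AutoTokenizer
#   model = T5ForConditionalGeneration.from_pretrained("razent/SciFive-base-Pubmed")
```

Under this formulation, "additional training on Scopus abstracts" amounts to continuing span-corruption training on domain text so that the sentinel predictions recover chemistry-specific compound nouns.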
Journal
-
- Proceedings of the Annual Conference of JSAI
-
JSAI2023 (0), 1E3GS605-1E3GS605, 2023
The Japanese Society for Artificial Intelligence
Details
-
- CRID
- 1390859758174443904
-
- ISSN
- 2758-7347
-
- Text Lang
- ja
-
- Data Source
-
- JaLC
-
- Abstract License Flag
- Disallowed