- Linting Xue, Google Research, USA. lintingx@google.com
- Aditya Barua, Google Research, USA. adityabarua@google.com
- Noah Constant, Google Research, USA. nconstant@google.com
- Rami Al-Rfou, Google Research, USA. rmyeid@google.com
- Sharan Narang, Google Research, USA. sharannarang@google.com
- Mihir Kale, Google Research, USA. mihirkale@google.com
- Adam Roberts, Google Research, USA. adarob@google.com
- Colin Raffel, Google Research, USA. craffel@gmail.com
Abstract
Most widely used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: They can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Because byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
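As a minimal illustration of the byte-level input scheme the abstract describes, the sketch below maps a string to UTF-8 byte IDs that a standard Transformer could consume, and back. This is not the authors' released code; the special-token IDs and the offset of 3 are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the released ByT5 code): UTF-8 bytes are used
# directly as vocabulary IDs, with a small assumed offset reserving IDs for
# special tokens such as padding and end-of-sequence.
PAD_ID, EOS_ID, UNK_ID = 0, 1, 2
SPECIAL_TOKENS = 3  # assumed offset so byte values 0-255 map to IDs 3-258

def text_to_byte_ids(text: str) -> list[int]:
    """Encode a string as a sequence of byte-level token IDs, ending with EOS."""
    return [b + SPECIAL_TOKENS for b in text.encode("utf-8")] + [EOS_ID]

def byte_ids_to_text(ids: list[int]) -> str:
    """Decode byte-level token IDs back to a string, skipping special tokens."""
    raw = bytes(i - SPECIAL_TOKENS for i in ids if i >= SPECIAL_TOKENS)
    return raw.decode("utf-8", errors="ignore")

# Works for any language or script out of the box, since no learned vocabulary
# or language-specific preprocessing is required.
ids = text_to_byte_ids("héllo")
print(ids)
print(byte_ids_to_text(ids))  # 'héllo'
```

Note that the resulting sequences are longer than subword token sequences (multi-byte characters expand to several IDs), which is the trade-off in compute and inference speed the paper characterizes.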
Published in
- Transactions of the Association for Computational Linguistics, vol. 10, pp. 291-306, 2022. MIT Press Journals.