The TextTokenizer function extracts English, Chinese, or Japanese tokens from text. Examples of tokens are words, punctuation marks, and numbers. Tokenization is the first step of many types of text analysis.
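To illustrate the concept only (this is a minimal sketch, not the TextTokenizer implementation, which relies on dictionary files for Chinese and Japanese segmentation), English-style tokenization can be approximated with a regular expression that separates words, numbers, and punctuation marks:

```python
import re

def tokenize(text):
    # Illustrative only: split text into word, number, and
    # punctuation tokens. A regex like this works for English;
    # Chinese and Japanese have no spaces between words and
    # require dictionary-based segmentation instead.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Tokenization is step 1; it precedes analysis."))
# ['Tokenization', 'is', 'step', '1', ';', 'it', 'precedes', 'analysis', '.']
```

Each element of the returned list is one token, ready for downstream analysis such as counting, tagging, or sentiment scoring.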
TextTokenizer uses files that are preinstalled on ML Engine. For details, see Preinstalled Files That Functions Use.