from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "why isn't my card working"
# encode to token ids without the [CLS]/[SEP] special tokens,
# then decode back to text, keeping the raw tokenization spacing
encoded = tokenizer.encode(text, add_special_tokens=False)
text_tokenized = tokenizer.decode(encoded, clean_up_tokenization_spaces=False)
print(text_tokenized)
Created August 26, 2020 06:34
[transformers usage] #pytorch