Custom tokenizer: adding a special case to spaCy's tokenizer
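A minimal, runnable sketch of registering a tokenizer special case, assuming spaCy v2 or later; spacy.blank("en") is used here so no trained model needs to be downloaded.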
import spacy
from spacy.symbols import ORTH

nlp = spacy.blank("en")  # tokenizer-only pipeline; no trained model required
doc = nlp("gimme that book")
print([w.text for w in doc])  # ['gimme', 'that', 'book']

# Register a special-case rule: tokenize "gimme" as "gim" + "me"
special_case = [{ORTH: "gim"}, {ORTH: "me"}]
nlp.tokenizer.add_special_case("gimme", special_case)
# The rule only affects Docs created afterwards, so tokenize the text again
print([w.text for w in nlp("gimme that book")])  # ['gim', 'me', 'that', 'book']
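Two caveats: the ORTH values of a special case must concatenate back to the exact original string (spaCy rejects rules that would change the underlying text), and the rule only applies to texts tokenized after it is registered, which is why the text is run through the pipeline again above. To check which rule produced each token, nlp.tokenizer.explain (available since spaCy v2.2.3) is handy:

# Inspect which rule produced each token
for pattern, token_text in nlp.tokenizer.explain("gimme that book"):
    print(pattern, token_text)  # SPECIAL-1 gim, SPECIAL-2 me, TOKEN that, TOKEN book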