

@joshua-taylor
Created October 10, 2020 09:45
spaCy tokenize
import spacy
from ftfy import fix_text
from tqdm import tqdm

nlp = spacy.load("en_core_web_sm")
tok_text = []  # output: our tokenised corpus, one list of tokens per document

# Lower-case the raw text and repair any broken unicode with ftfy
text = df.text.str.lower().values
text = [fix_text(str(i)) for i in text]

# Tokenising using spaCy, with the tagger/parser/NER disabled for speed
# (n_threads is deprecated from spaCy 2.1; spaCy 3 uses n_process instead)
for doc in tqdm(nlp.pipe(text, n_threads=2, disable=["tagger", "parser", "ner"])):
    tok = [t.text for t in doc if (t.is_ascii and not t.is_punct and not t.is_space)]
    tok_text.append(tok)
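
For context, a minimal sketch of the surrounding setup. The DataFrame below is a hypothetical toy corpus, not part of the gist; in the snippet above, df is assumed to already exist with a "text" column.

import pandas as pd
# Hypothetical two-document corpus for illustration only
df = pd.DataFrame({"text": ["Hello, world!", "spaCy makes tokenising fairly easy."]})
# After running the snippet above, tok_text would look roughly like:
# [['hello', 'world'], ['spacy', 'makes', 'tokenising', 'fairly', 'easy']]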
@joshua-taylor (Author)

Hi Alex - yes, that's right, it's just the standard fix_text function from ftfy. It can be removed with no impact on the rest of the code if you don't have any text-encoding issues.
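
For reference, a minimal sketch of what ftfy's fix_text does: it repairs mojibake (text decoded with the wrong encoding). The broken input string below is illustrative, in the style of ftfy's own documentation.

from ftfy import fix_text
# The check-mark character was mangled by a wrong-encoding round trip; fix_text restores it
print(fix_text("âœ” No problems"))  # -> "✔ No problems"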
