Text pre-processing
import nltk
from nltk.tokenize import word_tokenize

# word_tokenize depends on the punkt tokenizer models
nltk.download("punkt")

# read in text data
with open("crawl-for-parallel-corpora/DataSet/luganda.txt", "r") as file:
    raw = file.read()

# tokenize
tokens = word_tokenize(raw)

# remove punctuation and numbers, and make everything lowercase
tokens = [word.lower() for word in tokens if word.isalpha()]

# write output to file
with open("luganda", "w") as f:
    f.write(" ".join(tokens))
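The filtering step above can be checked on a small in-line sample. This sketch deliberately uses plain `str.split` instead of `word_tokenize` (so it runs without the punkt model); the sample string is a stand-in, not from the Luganda corpus. It also shows why the gist uses `word_tokenize` at all: with whitespace splitting, a word with attached punctuation like `"nnyo!"` fails `isalpha()` and is dropped entirely, whereas NLTK's tokenizer would split the punctuation off first.

```python
from collections import Counter

# stand-in sample text; the gist reads luganda.txt instead
raw = "Webale nnyo! Webale, ssebo. 123 webale"

# keep alphabetic tokens only, lowercased
tokens = [w.lower() for w in raw.split() if w.isalpha()]

print(tokens)           # ['webale', 'webale'] -- punctuated/numeric tokens dropped whole
print(Counter(tokens))  # Counter({'webale': 2})
```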