@benmarwick
Last active October 24, 2017 10:18
Analysis of collocation of words in a text with R. Extracts LHS and RHS collocates of a word of interest over a user-defined span. Calculates frequency of collocates and mean distances. Inspired by http://www.antlab.sci.waseda.ac.jp/software.html
# R code for basic collocation statistics on a text corpus.
# Extracts LHS and RHS collocates of a word of interest
# over a user-defined span. Calculates frequency of
# collocates and mean distances.
examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or
searching for guidance on mailing lists and here on SO, a reproducible example is often
asked and always helpful. What are your tips for creating an excellent example? How do
you paste data structures from r in a text format? What other information should you
include? Are there other tricks in addition to using dput(), dump() or structure()?
When should you include library() or require() statements? Which reserved words should
one avoid, in addition to c, df, data, etc? How does one make a great r reproducible
example? Sometimes the problem really isn't reproducible with a smaller piece of data,
no matter how hard you try, and doesn't happen with synthetic data (although it's
useful to show how you produced synthetic data sets that did not reproduce the
problem, because it rules out some hypotheses). Posting the data to the web
somewhere and providing a URL may be necessary. If the data can't be released
to the public at large but could be shared at all, then you may be able to
offer to e-mail it to interested parties (although this will cut down the
number of people who will bother to work on it). I haven't actually seen this
done, because people who can't release their data are sensitive about releasing
it any form, but it would seem plausible that in some cases one could still
post data if it were sufficiently anonymized/scrambled/corrupted slightly in
some way. If you can't do either of these then you probably need to hire a
consultant to solve your problem. You are most likely to get good help with
your R problem if you provide a reproducible example. A reproducible example
allows someone else to recreate your problem by just copying and pasting R
code. There are four things you need to include to make your example
reproducible: required packages, data, code, and a description of your
R environment. Packages should be loaded at the top of the script, so it's
easy to see which ones the example needs. The easiest way to include data
in an email is to use dput() to generate the R code to recreate it. For
example, to recreate the mtcars dataset in R, I'd perform the following
steps: Run dput(mtcars) in R Copy the output In my reproducible script,
type mtcars <- then paste. Spend a little bit of time ensuring that your
code is easy for others to read: make sure you've used spaces and your
variable names are concise, but informative, use comments to indicate
where your problem lies, do your best to remove everything that is not
related to the problem. The shorter your code is, the easier it is to
understand. Include the output of sessionInfo() as a comment. This summarises
your R environment and makes it easy to check if you're using an out-of-date
package. You can check you have actually made a reproducible example by
starting up a fresh R session and pasting your script in. Before putting
all of your code in an email, consider putting it on http://gist.github.com/.
It will give your code nice syntax highlighting, and you don't have to worry
about anything getting mangled by the email system. Do your homework before
posting: If it is clear that you have done basic background research, you are
far more likely to get an informative response. See also Further Resources
further down this page. Do help.search(keyword) and apropos(keyword) with
different keywords (type this at the R prompt). Do RSiteSearch(keyword)
with different keywords (at the R prompt) to search R functions, contributed
packages and R-Help postings. See ?RSiteSearch for further options and to
restrict searches. Read the online help for relevant functions (type
?functionname, e.g., ?prod, at the R prompt) If something seems to have
changed in R, look in the latest NEWS file on CRAN for information about
it. Search the R-faq and the R-windows-faq if it might be relevant
(http://cran.r-project.org/faqs.html) Read at least the relevant section
in An Introduction to R If the function is from a package accompanying a
book, e.g., the MASS package, consult the book before posting. The R Wiki
has a section on finding functions and documentation. Before asking a technical
question by e-mail, or in a newsgroup, or on a website chat board, do the following:
Try to find an answer by searching the archives of the forum you plan to post to.
Try to find an answer by searching the Web. Try to find an answer by reading the
manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection
or experimentation. Try to find an answer by asking a skilled friend. If you're a
programmer, try to find an answer by reading the source code. When you ask your
question, display the fact that you have done these things first; this will help
establish that you're not being a lazy sponge and wasting people's time. Better
yet, display what you have learned from doing these things. We like answering
questions for people who have demonstrated they can learn from the answers.
Use tactics like doing a Google search on the text of whatever error message
you get (searching Google groups as well as Web pages). This might well take
you straight to fix documentation or a mailing list thread answering your question.
Even if it doesn't, saying “I googled on the following phrase but didn't get anything
that looked promising” is a good thing to do in e-mail or news postings requesting help,
if only because it records what searches won't help. It will also help to direct other
people with similar problems to your thread by linking the search terms to what will
hopefully be your problem and resolution thread. Take your time. Do not expect to be
able to solve a complicated problem with a few seconds of Googling. Read and understand
the FAQs, sit back, relax and give the problem some thought before approaching experts.
Trust us, they will be able to tell from your questions how much reading and thinking
you did, and will be more willing to help if you come prepared. Don't instantly fire
your whole arsenal of questions just because your first search turned up no answers
(or too many). Prepare your question. Think it through. Hasty-sounding questions get
hasty answers, or none at all. The more you do to demonstrate that having put thought
and effort into solving your problem before seeking help, the more likely you are to
actually get help. Beware of asking the wrong question. If you ask one that is based
on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly
literal answer while thinking Stupid question..., and hoping the experience of getting
what you asked for rather than what you needed will teach you a lesson."
# KWIC concordance
require(tm)
my.corpus <- Corpus(VectorSource(examp1))
# Some standard preprocessing
my.corpus <- tm_map(my.corpus, stripWhitespace)
# wrap base functions like tolower in content_transformer() so the
# documents stay valid tm objects (required by tm >= 0.6)
my.corpus <- tm_map(my.corpus, content_transformer(tolower))
my.corpus <- tm_map(my.corpus, removePunctuation)
# 'not' is a stopword, so skip stopword removal to keep it in the text
# my.corpus <- tm_map(my.corpus, removeWords, stopwords("english"))
# my.corpus <- tm_map(my.corpus, stemDocument)
my.corpus <- tm_map(my.corpus, removeNumbers)
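# Optional diagnostic (not part of the original analysis): inspect the
# cleaned corpus to confirm the preprocessing above behaved as expected.
inspect(my.corpus)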
# Tokenizer for n-grams, passed to the term-document matrix constructor
library(RWeka)
span <- 4 # how many words either side of word of interest
span1 <- 1 + span * 2
ngramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = span1, max = span1))
dtm <- TermDocumentMatrix(my.corpus, control = list(tokenize = ngramTokenizer))
inspect(dtm)
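# To make the window size concrete, a minimal illustration on a toy
# sentence (invented for demonstration only): each token produced is a
# run of span1 consecutive words, so a word sitting in the middle of a
# window has exactly `span` words on either side.
NGramTokenizer("one two three four five six seven eight nine ten",
               Weka_control(min = span1, max = span1))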
# find ngrams that have the node word in them
word <- 'example'
subset_ngrams <- dtm$dimnames$Terms[grep(word, dtm$dimnames$Terms)]
# keep only ngrams with the word of interest in the middle. This
# removes duplicates and lets us see what's on either side
# of the word of interest
subset_ngrams <- subset_ngrams[sapply(subset_ngrams, function(i) {
  tmp <- unlist(strsplit(i, split = " "))
  tmp[length(tmp) - span] == word
})]
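# Quick check (diagnostic only): the node word should now appear in the
# middle position of every remaining window.
head(subset_ngrams, 3)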
# now find collocated word in the ngrams
# coloc <- "reproducible"
# subset_ngrams <- subset_ngrams[grep(coloc, subset_ngrams)]
# how many collocations?
# length(subset_ngrams)
# inspect them
# subset_ngrams
# how to find *all* collocates for my word of interest
# within the specified span? Right and left?
allwords <- paste(subset_ngrams, collapse = " ")
uniques <- unique(unlist(strsplit(allwords, split=" ")))
# LHS colocs
LHS <- data.frame(matrix(nrow = length(uniques), ncol = length(subset_ngrams)))
for(i in 1:length(subset_ngrams)){
# find position of unique words along ngram vector
pos1 <- sapply(uniques, function(x) which(x == unlist(strsplit(subset_ngrams[[i]], split=" "))))
# find position of word of interest along ngram vector
pos2 <- which(word == unlist(strsplit(subset_ngrams[[i]], split=" ")) )
# compute distance of all colocs to word of interest
dist <- lapply(pos1, function(i) pos2 - i )
# keep only +ve distances (the collocate precedes the node word)
dist <- lapply(dist, function(i) i[i>0][1] )
# one value per unique word: its distance to the node word in this
# window, or NA if it is not a left-hand collocate here
LHS[, i] <- unlist(unname(dist))
}
row.names(LHS) <- uniques
# compute the mean distance between each collocate and the node word
LHS_means <- rowMeans(LHS, na.rm = TRUE)
# also get coloc frequencies in spans
# function to count non-NA values
countN <- function(v) sum(!is.na(v))
LHS_freqs <- apply(LHS, 1, countN )
LHS_means <- data.frame(word = names(LHS_means),
mean_dist = unname(LHS_means),
freq = unname(LHS_freqs))
# sort by mean distance
LHS_means <- LHS_means[with(LHS_means, order(mean_dist)), ]
# sort by frequency
LHS_means <- LHS_means[with(LHS_means, order(-freq)), ]
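# How to read the result (a usage sketch): each row of LHS_means is a
# left-hand collocate of the node word, with its mean distance in words
# and its frequency within the span.
head(LHS_means)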
# RHS colocs
RHS <- data.frame(matrix(nrow = length(uniques), ncol = length(subset_ngrams)))
for(i in 1:length(subset_ngrams)){
# find position of unique words along ngram vector
pos1 <- sapply(uniques, function(x) which(x == unlist(strsplit(subset_ngrams[[i]], split=" "))))
# find position of word of interest along ngram vector
pos2 <- which(word == unlist(strsplit(subset_ngrams[[i]], split=" ")) )
# compute distance of all colocs to word of interest
dist <- lapply(pos1, function(i) pos2 - i )
# keep only -ve distances (the collocate follows the node word)
dist <- lapply(dist, function(i) i[i<0][1] )
# one value per unique word: its distance to the node word in this
# window, or NA if it is not a right-hand collocate here
RHS[, i] <- unlist(unname(dist))
}
row.names(RHS) <- uniques
# compute the mean distance between each collocate and the node word
RHS_means <- rowMeans(RHS, na.rm = TRUE)
# also get coloc frequencies in spans
# function to count non-NA values (same helper as for the LHS)
countN <- function(v) sum(!is.na(v))
RHS_freqs <- apply(RHS, 1, countN )
RHS_means <- data.frame(word = names(RHS_means),
mean_dist = unname(RHS_means),
freq = unname(RHS_freqs))
# sort by mean distance
RHS_means <- RHS_means[with(RHS_means, order(-mean_dist)), ]
# sort by frequency
RHS_means <- RHS_means[with(RHS_means, order(-freq)), ]
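# Likewise for the right-hand side; mean_dist is negative here because
# these collocates follow the node word.
head(RHS_means)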
# compute mutual information for each collocate of the node word,
# following the formula referenced below
# (http://corpus.byu.edu/mutualInformation.asp):
#   MI = log( (AB * sizeCorpus) / (A * B * span) ) / log(2)
# This is a corrected sketch of the original (admittedly rough) calculation:
# AB is approximated by the span frequencies computed above, which may
# double-count a word that appears on both sides of the same window.
corpus_words <- unlist(strsplit(examp1, split = " "))
sizeCorpus <- length(corpus_words)  # size of corpus = number of words in total
span_total <- 2 * span              # words searched to L and R of the node word
MI <- vector(length = length(uniques))
for(i in 1:length(uniques)){
  # A = frequency of node word
  A <- length(grep(word, corpus_words))
  # B = frequency of collocate
  B <- length(grep(uniques[i], corpus_words))
  # AB = frequency of the collocate within the span around the node word
  AB <- LHS_freqs[uniques[i]] + RHS_freqs[uniques[i]]
  # compute MI (NA if the collocate never co-occurs within the span)
  MI[i] <- ifelse(AB > 0, log((AB * sizeCorpus) / (A * B * span_total)) / log(2), NA)
}
names(MI) <- uniques
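# A quick look at the strongest collocates by MI (subject to the
# approximations noted above; NA values are dropped by sort()).
head(sort(MI, decreasing = TRUE))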
# how to specify minimum collocate frequency? Only ones
# that occur at least twice?
# how to get some kind of statistic for each collocate? MI?
# antconc uses
# M. Stubbs, Collocations and Semantic Profiles, Functions of Language 2, 1 (1995)
# MI: http://corpus.byu.edu/mutualInformation.asp
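# One possible answer to the minimum-frequency question above (an added
# sketch, not part of the original analysis): keep only collocates that
# occur at least twice within the span.
min_freq <- 2
LHS_means[LHS_means$freq >= min_freq, ]
RHS_means[RHS_means$freq >= min_freq, ]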
#################