
@codegordi
Forked from abelsonlive/srapeshell.R
Created September 27, 2012 16:53
# Best practices for web scraping in R
# The scrape() function below is meant to be called once per URL via plyr's ldply,
# which binds the per-page data frames into one result:
# eg:
ldply(urls, scrape)
# wrap the call in try() to skip broken links / unresponsive pages
# (note: `next` is only valid inside loops, so return NULL instead --
# ldply silently drops NULL results)
# eg:
ldply(urls, function(url){
  out <- try(scrape(url))
  if (inherits(out, 'try-error')) return(NULL)
  return(out)
})
# insert a random sleep interval between requests to avoid getting booted
# eg:
ldply(urls, function(url){
  out <- try(scrape(url))
  if (inherits(out, 'try-error')) return(NULL)
  Sys.sleep(runif(1, 1, 3))  # pause 1-3 seconds
  return(out)
})
scrape <- function(url)
{
  # install and load dependencies on first use
  for (pkg in c('XML', 'RCurl', 'plyr', 'stringr')) {
    if (!require(pkg, character.only = TRUE)) {
      install.packages(pkg)
      library(pkg, character.only = TRUE)
    }
  }
  df <- data.frame(url = url, stringsAsFactors = FALSE)
  # download the page; fall back to readLines if getURL fails
  html <- try(getURL(df$url))
  if (inherits(html, 'try-error')) {
    html <- readLines(df$url, warn = FALSE)
  }
  tree <- htmlTreeParse(html, useInternalNodes = TRUE)
  #--------------------#
  #  ENTER XPATH HERE  #
  #--------------------#
  return(data.frame(df, stringsAsFactors = FALSE))
}
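As a sketch of what the placeholder section might become, here is a hypothetical XPath extraction. The inline HTML, the `//h2[@class='headline']` expression, and the `headline` column name are all illustrative assumptions standing in for whatever the downloaded page actually contains:

```r
library(XML)

# hypothetical stand-in for a page fetched with getURL()
html <- '<html><body>
           <h2 class="headline">First story</h2>
           <h2 class="headline">Second story</h2>
         </body></html>'
tree <- htmlTreeParse(html, useInternalNodes = TRUE)

# pull the text of every matching node; this is where a page-specific
# XPath would replace the placeholder inside scrape()
headlines <- xpathSApply(tree, "//h2[@class='headline']", xmlValue)
data.frame(headline = headlines, stringsAsFactors = FALSE)
```

Inside scrape() the extracted vector would be bound onto `df` before the final return, so each URL contributes one row per matched node.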
@codegordi
Author

Remove the placeholder block (the "ENTER XPATH HERE" box) and insert the XPath extraction specific to your page set.
