@xeoncross
Created March 19, 2014 19:01
Large scale node.js website scraping
2.9M Websites Downloaded -> 16.3 Minutes
Patrick McConlogue
In my last post I discussed how to scrape at scale. While there is a tricky memory leak in this, here is the script (for "ring".com) to download and get the pagerank for almost three million websites from a maximum of 500,000 queries.
http://thnkr.quora.com/2-9M-Websites-Downloaded-16-3-Minutes
NOTES:
Essentially, whatever you put into the "list.txt" file will be scraped, and the top 10 results for each query will be downloaded as JSON (a sample input and output record follows these notes).
Notice that it uses parallel buffers in the read stream; this is intentional, so that it can deliberately cause a memory overload and catch it.
I am not responsible for what you do with this; I built it for local stress testing.
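For example (the keywords here are made up), list.txt is simply one query per line:
video doorbell
wireless security camera
smart lock
Each record the job emits to output.json has the shape produced by the emit() call in the script below; the values in this record are illustrative only:
{ "QUERY": "video doorbell",
  "timestamp": 1395255660000,
  "series_loc": 10,
  "results": [{ "url": "http://example.com/", "pagerank": 1, "website": "<html>...</html>" }] }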
INSTALLATION:
1. node.js v0.10.2 (it just has to be > 0.8).
2. npm -g install node.io
3. Increase the node (V8) heap size on your machine to a minimum of 7 GB (you need a minimum of 8 GB of RAM on the machine).
4. Raise the open-file limit on your machine: run ulimit -n 999999 (as root if needed) in the shell that will launch the crawler.
5. chmod the entire directory to 776 for faster writes (this one is critical).
6. Use the attached script and include -b so node.io reports crawl stats.
7. Make sure you add the "--spoof" flag to node.io, because I did not build in a user-agent rotator.
8. Visit node.io for documentation.
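If you would rather launch the job from code instead of the node.io command line, something like the following should work. This is only a minimal sketch: it assumes node.io's programmatic start() entry point and spoof/benchmark options (which should correspond to the --spoof and -b flags above); check the node.io documentation for the exact names, and note that the file names are hypothetical.
// run.js - hypothetical launcher for the attached script, saved here as scrape.js
var nodeio = require('node.io');
var job = require('./scrape').job;
// spoof: spoof the user agent; benchmark: print crawl stats when the run finishes.
nodeio.start(job, { spoof: true, benchmark: true }, function (err) {
    if (err) console.error('Crawl failed: ' + err);
    else console.log('Crawl complete');
});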
/* CONCEPTUAL ANALYSES */
var nodeio = require('node.io');
var util = require('util');
var fs = require('fs');

// Crawl tuning: 0.2s wait between requests, 50 concurrent requests, 15s timeout.
var options = { wait: 0.20, max: 50, timeout: 15, read_buffer: 800 };

var root = 'http://www.ring.com/search?q=';
var total_urls = 0;
var total_queries = 0;
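// Stream the keyword list with a deliberately oversized read buffer (64 * 8880000 is roughly 568 MB);
// the resulting memory pressure is intentional (see the notes above).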
var stream = fs.createReadStream('list.txt', {
    flags: 'r',
    encoding: null,
    fd: null,
    mode: 0666,
    bufferSize: 64 * 8880000,
    autoClose: true
});
var status = '[STARTING]'; // TODO: EMIT TO MASTER NODE.
console.log(status);
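// Do not let the crawler die on uncaught errors; just report memory usage and keep going.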
process.on('uncaughtException', function (err) {
    process.stdout.write('WARNING: Approaching buffer...' + util.inspect(process.memoryUsage()));
});
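// The node.io job: stream keywords in, hit the ring.com search page for each one,
// then fetch every top-10 result link and emit one JSON record per downloaded page.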
exports.job = new nodeio.Job(options, {
    input: function () {
        this.inputStream(stream);
        this.input.apply(this, arguments);
    },
    output: 'output.json',
    run: function (keyword) {
        var that = this;
        var status = '[EMIT]';

        // encodeURIComponent turns spaces into %20; convert them to '+' for the search query.
        this.getHtml(root + encodeURIComponent(keyword).split('%20').join('+'), function (err, $, data, headers) {
            try {
                var pagerank = 0;
                total_urls = total_urls + 10;

                // Fetch each top-10 result link and emit one record per downloaded page.
                $('h3 a').each('href', function (a) {
                    that.get(a, function (err, data) {
                        pagerank++;
                        process.stdout.write(status + ' '
                            + pagerank
                            + ' <- BOT:2.41_RING -> '
                            + total_urls + ' [ '
                            + util.inspect(process.memoryUsage()) + ' ]\r', function () {
                            that.emit({ 'QUERY': keyword,
                                'timestamp': new Date().getTime(),
                                'series_loc': total_urls,
                                'results': [{ 'url': a,
                                    'pagerank': pagerank,
                                    'website': data
                                }]
                            });
                        });
                    });
                });
            } catch (err) {
                that.skip();
            }
        });
    }
});
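// Append a closing ']' to output.json (opened in append mode; this runs as soon as the script is loaded).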
fs.open("output.json", 'a', 0666, function (err, fd) {
    fs.write(fd, "\n]", null, undefined, function (err, written) {
        console.log('Bytes appended to output.json: ' + written);
    });
});