Chris Williams cdwillie76

cdwillie76 / information.txt
Last active December 15, 2015 07:59
Information about a logstash NullPointerException when running the standalone demo
Mac OS X 10.6.8
java -version
java version "1.6.0_43"
Java(TM) SE Runtime Environment (build 1.6.0_43-b01-447-10M4203)
Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01-447, mixed mode)
java -jar logstash-1.1.9-monolithic.jar agent -f logstash-simple.conf -- web --backend elasticsearch:///?local
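The logstash-simple.conf referenced by this command is not included in the gist. For context, a standalone-demo config for logstash 1.1.x was typically of roughly this shape; the contents below are an assumption for illustration, not the author's actual file.

# hypothetical logstash-simple.conf -- illustration only, not the gist's file
input {
  stdin {
    type => "stdin-type"
  }
}

output {
  stdout { debug => true }
  elasticsearch { embedded => true }
}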
test
module("Pub");
var pubFixture = new Pub();
test("key getter", function() {
pubFixture.point = {lat: 1234, long: 5678};
equals(pubFixture.key, "1234,5678", "Key property did not return comma delimited version of Point");
});
module("Crawl");
10/03/28 15:58:23 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
10/03/28 15:58:26 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
10/03/28 15:58:26 INFO input.FileInputFormat: Total input paths to process : 1
10/03/28 15:58:27 INFO mapred.JobClient: Running job: job_local_0001
10/03/28 15:58:27 INFO input.FileInputFormat: Total input paths to process : 1
10/03/28 15:58:28 INFO mapred.JobClient: map 0% reduce 0%
10/03/28 15:58:29 INFO mapred.MapTask: io.sort.mb = 100
10/03/28 15:58:39 INFO mapred.MapTask: data buffer = 79691776/99614720
10/03/28 15:58:39 INFO mapred.MapTask: record buffer = 262144/327680
10/03/28 15:58:43 INFO mapred.LocalJobRunner:
cdwillie76 / Hadoop_WordCount_v0.20.1.java
Created March 28, 2010 19:32
Converted the WordCount example to be v0.20.x compliant. For some reason the reduce isn't being called; only the map runs. Any ideas?
package hadoop.examples;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
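The gist preview cuts off after the imports, so the mapper and reducer bodies are not visible here. A common cause of "the reduce isn't being called" when porting WordCount to the 0.20.x org.apache.hadoop.mapreduce API is keeping the old-style reduce(Text key, Iterator<IntWritable> values, ...) signature: it compiles, but it does not override Reducer.reduce(), so Hadoop silently falls back to the identity reducer (it is also worth confirming that the driver calls job.setReducerClass(...)). The sketch below reuses the imports above and an illustrative class name, IntSumReducer, to show the Iterable-based signature the new API expects; it is not the gist's actual code.

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    // @Override makes the compiler reject a mismatched signature, e.g. one that
    // takes Iterator<IntWritable> instead of Iterable<IntWritable>.
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();   // add up the per-word counts emitted by the mapper
        }
        result.set(sum);
        context.write(key, result);
    }
}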