@pierre
Created September 9, 2011 22:14
dot stuff
Simple:
digraph G {
    rankdir = BT
    node [shape = "box"]
    HDFS -> Event -> Schema
}
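DOT text like the above is often easier to emit from code than to hand-edit. A minimal Python sketch that assembles the same digraph as a string (the `digraph` helper is hypothetical, not part of any library):

```python
def digraph(name, statements):
    """Assemble DOT source for a digraph from a list of statement strings."""
    body = "\n".join("    " + s for s in statements)
    return f"digraph {name} {{\n{body}\n}}"

dot = digraph("G", [
    "rankdir = BT",
    'node [shape = "box"]',
    "HDFS -> Event -> Schema",
])
print(dot)
```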
More complex:
digraph G {
    subgraph SequenceFile {
        rankdir = BT
        "file" [
            label = "Header|<f1>Record|Record|Sync|Record|Record|Record|Sync|Record"
            shape = "record"
        ];
        "record" [
            label = "<f1>Key length (5)|Serialized event length|TBooleanWritable(true)|Event#getData()"
            shape = "record"
        ];
        "recordcompressed" [
            label = "Number of pairs (uncompressed)|Size of the keys (compressed)|Keys (compressed)|Size of the events (compressed)|Events (compressed)"
            shape = "record"
        ];
        file:f1 -> record:f1 [arrowhead = dot]
        record -> recordcompressed
    }
}
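In a `record` shape, `|` separates fields and `<name>` declares a port, which edges can then target (as `file:f1 -> record:f1` does above). A small Python sketch of building such a label, where `record_label` is a hypothetical helper:

```python
def record_label(fields):
    """Join record fields with '|'; a (port, text) tuple renders as '<port>text'."""
    parts = []
    for field in fields:
        if isinstance(field, tuple):
            port, text = field
            parts.append(f"<{port}>{text}")
        else:
            parts.append(field)
    return "|".join(parts)

label = record_label(["Header", ("f1", "Record"), "Record", "Sync"])
print(label)  # Header|<f1>Record|Record|Sync
```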
With style:
digraph g {
    "tevent_writer" [
        label = "{ThresholdEventWriter| |<f1>write (Event) : void\l|<f2>commit (Event) : void\l}"
        shape = "record"
    ];
    "disk_writer" [
        label = "{DiskSpoolEventWriter| |<f1>write (Event) : void\l|<f2>commit (Event) : void\l|<f3>flushToPersistentWriter () : void\l}"
        shape = "record"
    ];
    "hadoop_writer" [
        label = "{HadoopFileEventWriter| |<f1>write (Event) : void\l|<f2>commit (Event) : void\l|<f3>forceCommit () : void\l}"
        shape = "record"
    ];
    disk_writer:f3 -> hadoop_writer:f1 [arrowhead = dot, label = "ScheduledExecutorService (corePoolSize = maximumPoolSize = 2)"]
    disk_writer:f3 -> hadoop_writer:f3 [arrowhead = dot]
}