Voldemort BnP error log
Voldemort version detected: 1.10.15
Executing BnP with:
config_file : /voldemort/bap.job
hadoop_config_path : /etc/hadoop/conf/
16/04/26 20:36:06 INFO azkaban.VoldemortBuildAndPushJobRunner: Extracting config properties out of: /voldemort/bap.job
16/04/26 20:36:06 INFO shell-job: Job props:
{
avro.key.field: key
avro.serializer.versioned: true
avro.value.field: value
azkaban.should.proxy: true
build.input.path: /facebook_first_level.out/
build.output.dir: /tmp/
build.type.avro: true
enable.store.creation: true
push.cluster: tcp://node0:6666
push.store.description: "Testing avro build and push"
push.store.name: test2
push.store.owners: myemail@myworkplace.com
user.to.proxy: hadoop
}
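For reference, a /voldemort/bap.job that yields the props dumped above would look roughly like this (a minimal sketch in flat key=value .properties form, reconstructed from the logged props; the file's actual layout, ordering, and any comments are not shown in the log):

avro.key.field=key
avro.serializer.versioned=true
avro.value.field=value
azkaban.should.proxy=true
build.input.path=/facebook_first_level.out/
build.output.dir=/tmp/
build.type.avro=true
enable.store.creation=true
push.cluster=tcp://node0:6666
push.store.description="Testing avro build and push"
push.store.name=test2
push.store.owners=myemail@myworkplace.com
user.to.proxy=hadoop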
16/04/26 20:36:06 INFO client.AbstractStoreClientFactory: Client zone-id [-1] Attempting to get raw store [voldsys$_metadata_version_persistence]
16/04/26 20:36:06 INFO client.AbstractStoreClientFactory: Client zone-id [-1] Attempting to get raw store [voldsys$_store_quotas]
16/04/26 20:36:06 INFO shell-job: voldemort.fetcher.protocol is set to : webhdfs
16/04/26 20:36:06 INFO shell-job: voldemort.fetcher.port is set to : 50070
16/04/26 20:36:06 INFO shell-job: Build and Push Job constructed for 1 cluster(s).
16/04/26 20:36:06 INFO shell-job: Requesting block-level compression codec expected by Server
16/04/26 20:36:07 INFO shell-job: Server responded with block-level compression codecs: [ NO_CODEC ]
16/04/26 20:36:07 INFO shell-job: Using no block-level compression
16/04/26 20:36:09 INFO shell-job: Verifying store against cluster URL: tcp://node0:6666
<store>
    <name>test2</name>
    <persistence>read-only</persistence>
    <description>"Testing avro build and push"</description>
    <owners>myemail@myworkplace.com</owners>
    <routing>client</routing>
    <replication-factor>2</replication-factor>
    <required-reads>1</required-reads>
    <required-writes>1</required-writes>
    <key-serializer>
        <type>avro-generic-versioned</type>
        <schema-info version="0">"int"</schema-info>
    </key-serializer>
    <value-serializer>
        <type>avro-generic-versioned</type>
        <schema-info version="0">"string"</schema-info>
    </value-serializer>
</store>
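Note that the replication factor of 2 and the 1/1 required reads/writes in the generated store definition do not correspond to anything in the job props above, so they are presumably BnP defaults; if memory serves they can be overridden with build.replication.factor, build.required.reads, and build.required.writes, but those property names are an assumption, not confirmed by this log.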
16/04/26 20:36:09 INFO utils.StoreDefinitionUtils: Validating schema for store: test2
16/04/26 20:36:09 INFO admin.AdminClient: Adding store test2 on Node 172.16.238.13:6666 [id 0]
16/04/26 20:36:09 INFO admin.AdminClient: Successfully added test2 on Node 172.16.238.13:6666 [id 0]
16/04/26 20:36:09 INFO admin.AdminClient: Adding store test2 on Node 172.16.238.14:6669 [id 1]
16/04/26 20:36:09 INFO admin.AdminClient: Successfully added test2 on Node 172.16.238.14:6669 [id 1]
16/04/26 20:36:09 INFO admin.AdminClient: Adding store test2 on Node 172.16.238.15:6672 [id 2]
16/04/26 20:36:09 INFO admin.AdminClient: Successfully added test2 on Node 172.16.238.15:6672 [id 2]
16/04/26 20:36:09 INFO mr.HadoopStoreBuilder: HadoopStoreBuilder constructed with numChunksOverride <= 0, thus relying on chunk size.
16/04/26 20:36:09 INFO utils.HadoopUtils: findContainingJar finds url:file:/tmp/hadoop-unjar6302159187053892224/voldemort/store/readonly/mr/HadoopStoreBuilder.class
16/04/26 20:36:09 INFO utils.HadoopUtils: findContainingJar finds url:jar:file:/voldemort/build/libs/voldemort-1.10.15-bnp.jar!/voldemort/store/readonly/mr/HadoopStoreBuilder.class
16/04/26 20:36:09 INFO azkaban.AbstractHadoopJob: Setting hadoop jar file for class:class voldemort.store.readonly.mr.HadoopStoreBuilder to /voldemort/build/libs/voldemort-1.10.15-bnp.jar
16/04/26 20:36:09 INFO azkaban.AbstractHadoopJob: *************************************************************************
16/04/26 20:36:09 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
16/04/26 20:36:09 INFO azkaban.AbstractHadoopJob: Running on Real Hadoop Cluster(local)
16/04/26 20:36:09 INFO azkaban.AbstractHadoopJob: *************************************************************************
16/04/26 20:36:09 INFO Configuration.deprecation: dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode
16/04/26 20:36:09 INFO mr.HadoopStoreBuilder: Data size = 865719, replication factor = 2, numNodes = 3, numPartitions = 9, chunk size = 1073741824
16/04/26 20:36:09 INFO mr.HadoopStoreBuilder: Number of chunks: 1, number of reducers: 9, save keys: true, reducerPerBucket: true, buildPrimaryReplicasOnly: true
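Aside on the chunk math: with the figures logged above, each partition bucket holds roughly 865,719 bytes x 2 replicas / 9 partitions ≈ 192 KB, far below the 1 GiB chunk size, so the builder rounds up to a single chunk per bucket; with reducerPerBucket=true and buildPrimaryReplicasOnly=true that gives 9 partitions x 1 chunk = 9 reducers. (The exact rounding rule inside HadoopStoreBuilder is assumed here, not shown in the log.)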
16/04/26 20:36:09 INFO mr.HadoopStoreBuilder: Building store...
16/04/26 20:36:10 INFO client.RMProxy: Connecting to ResourceManager at /172.16.238.10:8032
16/04/26 20:36:10 INFO client.RMProxy: Connecting to ResourceManager at /172.16.238.10:8032
16/04/26 20:36:10 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/04/26 20:36:10 INFO mapred.FileInputFormat: Total input paths to process : 1
16/04/26 20:36:10 INFO mapreduce.JobSubmitter: number of splits:2
16/04/26 20:36:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1461702789991_0002
16/04/26 20:36:11 INFO impl.YarnClientImpl: Submitted application application_1461702789991_0002
16/04/26 20:36:11 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1461702789991_0002/
16/04/26 20:36:11 INFO mapreduce.Job: Running job: job_1461702789991_0002
16/04/26 20:36:17 INFO mapreduce.Job: Job job_1461702789991_0002 running in uber mode : false
16/04/26 20:36:17 INFO mapreduce.Job: map 0% reduce 0%
16/04/26 20:36:23 INFO mapreduce.Job: map 100% reduce 0%
16/04/26 20:36:36 INFO mapreduce.Job: map 100% reduce 11%
16/04/26 20:36:37 INFO mapreduce.Job: map 100% reduce 22%
16/04/26 20:36:40 INFO mapreduce.Job: map 100% reduce 33%
16/04/26 20:36:41 INFO mapreduce.Job: map 100% reduce 56%
16/04/26 20:36:42 INFO mapreduce.Job: map 100% reduce 78%
16/04/26 20:36:43 INFO mapreduce.Job: map 100% reduce 100%
16/04/26 20:36:44 INFO mapreduce.Job: Job job_1461702789991_0002 completed successfully
16/04/26 20:36:44 INFO mapreduce.Job: Counters: 49
  File System Counters
    FILE: Number of bytes read=997000
    FILE: Number of bytes written=3299082
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=1014597
    HDFS: Number of bytes written=965693
    HDFS: Number of read operations=71
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=126
  Job Counters
    Launched map tasks=2
    Launched reduce tasks=9
    Data-local map tasks=2
    Total time spent by all maps in occupied slots (ms)=8325
    Total time spent by all reduces in occupied slots (ms)=122764
    Total time spent by all map tasks (ms)=8325
    Total time spent by all reduce tasks (ms)=122764
    Total vcore-seconds taken by all map tasks=8325
    Total vcore-seconds taken by all reduce tasks=122764
    Total megabyte-seconds taken by all map tasks=8524800
    Total megabyte-seconds taken by all reduce tasks=125710336
  Map-Reduce Framework
    Map input records=4039
    Map output records=4039
    Map output bytes=985480
    Map output materialized bytes=997054
    Input split bytes=240
    Combine input records=0
    Combine output records=0
    Reduce input groups=4039
    Reduce shuffle bytes=997054
    Reduce input records=4039
    Reduce output records=0
    Spilled Records=8078
    Shuffled Maps =18
    Failed Shuffles=0
    Merged Map outputs=18
    GC time elapsed (ms)=1868
    CPU time spent (ms)=23590
    Physical memory (bytes) snapshot=2631782400
    Virtual memory (bytes) snapshot=19249180672
    Total committed heap usage (bytes)=2666528768
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Input Format Counters
    Bytes Read=1014357
  File Output Format Counters
    Bytes Written=2115
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: Number of collisions in the job - 0
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: Maximum number of collisions for one entry - 0
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: partition-0: Checksum = ea6efbd6738c53cde18808d834a0754e, Size = 100 KB
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: partition-1: Checksum = fc9b54c5bf0c6ea9c74f96449e99ecaa, Size = 104 KB
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: partition-2: Checksum = 7a3e1eabb319e39a8c94e809bf8a5975, Size = 102 KB
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: partition-3: Checksum = a72bacaafd1d04f4a43aa859155fc278, Size = 120 KB
16/04/26 20:36:44 INFO mr.HadoopStoreBuilder: partition-4: Checksum = 5888d5772f70f66f1a194c01f4df48b6, Size = 102 KB
16/04/26 20:36:45 INFO mr.HadoopStoreBuilder: partition-5: Checksum = d2c7b8e6817a174f753e7911edc03f9e, Size = 108 KB
16/04/26 20:36:45 INFO mr.HadoopStoreBuilder: partition-6: Checksum = 43231ab6eb09319533e15a0dcf23e5ef, Size = 101 KB
16/04/26 20:36:45 INFO mr.HadoopStoreBuilder: partition-7: Checksum = 76725a62104dd58a227d936226bc5ac4, Size = 99 KB
16/04/26 20:36:45 INFO mr.HadoopStoreBuilder: partition-8: Checksum = 48e18cb79e9a1e622513022f36fd42a5, Size = 100 KB
16/04/26 20:36:45 INFO mr.HadoopStoreBuilder: Setting permission to 755 for /tmp/node0/full-store.metadata
16/04/26 20:36:45 INFO shell-job: Pushing to cluster URL: tcp://node0:6666
16/04/26 20:36:45 INFO shell-job: StorePushTask.call() invoked for cluster URL: tcp://node0:6666
16/04/26 20:36:45 WARN shell-job: pushHighAvailability is DISABLED on cluster: tcp://node0:6666
16/04/26 20:36:45 INFO shell-job: Push starting for cluster: tcp://node0:6666
16/04/26 20:36:45 INFO utils.VoldemortUtils: Existing protocol = hdfs and port = 8020
16/04/26 20:36:45 INFO utils.VoldemortUtils: New protocol = webhdfs and port = 50070
16/04/26 20:36:45 INFO client.AbstractStoreClientFactory: Client zone-id [-1] Attempting to get raw store [voldsys$_metadata_version_persistence]
16/04/26 20:36:45 INFO client.AbstractStoreClientFactory: Client zone-id [-1] Attempting to get raw store [voldsys$_store_quotas]
16/04/26 20:36:45 INFO swapper.AdminStoreSwapper|tcp://node0:6666: tcp://node0:6666 : Initiating swap of test2 with dataDir: /tmp/node0
16/04/26 20:36:45 INFO swapper.AdminStoreSwapper|tcp://node0:6666: tcp://node0:6666 : Invoking fetch for Node 172.16.238.13:6666 [id 0] for webhdfs://172.16.238.10:50070/tmp/node0
16/04/26 20:36:45 INFO swapper.AdminStoreSwapper|tcp://node0:6666: tcp://node0:6666 : Invoking fetch for Node 172.16.238.14:6669 [id 1] for webhdfs://172.16.238.10:50070/tmp/node0
16/04/26 20:36:45 INFO swapper.AdminStoreSwapper|tcp://node0:6666: tcp://node0:6666 : Invoking fetch for Node 172.16.238.15:6672 [id 2] for webhdfs://172.16.238.10:50070/tmp/node0
16/04/26 20:36:45 ERROR swapper.AdminStoreSwapper|tcp://node0:6666: tcp://node0:6666 : Error on Node 172.16.238.13:6666 [id 0] during push :
java.util.concurrent.ExecutionException: voldemort.VoldemortException: Fetch url webhdfs://172.16.238.10:50070/tmp/node0 is not readable
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:188)
    at voldemort.store.readonly.swapper.AdminStoreSwapper.invokeFetch(AdminStoreSwapper.java:220)
    at voldemort.store.readonly.swapper.AdminStoreSwapper.swapStoreData(AdminStoreSwapper.java:123)
    at voldemort.store.readonly.mr.azkaban.VoldemortSwapJob.run(VoldemortSwapJob.java:157)
    at voldemort.store.readonly.mr.azkaban.VoldemortBuildAndPushJob.runPushStore(VoldemortBuildAndPushJob.java:761)
    at voldemort.store.readonly.mr.azkaban.VoldemortBuildAndPushJob$StorePushTask.call(VoldemortBuildAndPushJob.java:511)
    at voldemort.store.readonly.mr.azkaban.VoldemortBuildAndPushJob$StorePushTask.call(VoldemortBuildAndPushJob.java:494)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: voldemort.VoldemortException: Fetch url webhdfs://172.16.238.10:50070/tmp/node0 is not readable
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at voldemort.utils.ReflectUtils.callConstructor(ReflectUtils.java:116)
    at voldemort.utils.ReflectUtils.callConstructor(ReflectUtils.java:103)
    at voldemort.store.ErrorCodeMapper.getError(ErrorCodeMapper.java:83)
    at voldemort.client.protocol.admin.AdminClient$HelperOperations.throwException(AdminClient.java:438)
    at voldemort.client.protocol.admin.AdminClient$RPCOperations.getAsyncRequestStatus(AdminClient.java:683)
    at voldemort.client.protocol.admin.AdminClient$RPCOperations.waitForCompletion(AdminClient.java:888)
    at voldemort.client.protocol.admin.AdminClient$RPCOperations.waitForCompletion(AdminClient.java:959)
    at voldemort.client.protocol.admin.AdminClient$ReadOnlySpecificOperations.fetchStore(AdminClient.java:4132)
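Diagnosis note: the build and store creation succeed; the failure happens when the storage nodes try to fetch webhdfs://172.16.238.10:50070/tmp/node0 (the hdfs://...:8020 URL is rewritten to webhdfs per voldemort.fetcher.protocol/port, as logged above) and the server-side fetcher reports the URL as not readable. That usually points to an HDFS permission or visibility problem: the user the Voldemort server runs as must be able to list and read the whole output tree over WebHDFS, and the log only shows a chmod 755 on the metadata file. A minimal check, assuming shell access to a host with the Hadoop client configured (these are standard hdfs/WebHDFS commands, not taken from this log):

# Can the build output be listed over WebHDFS at all?
hdfs dfs -ls -R webhdfs://172.16.238.10:50070/tmp/node0

# Same check via the raw WebHDFS REST API
curl -i "http://172.16.238.10:50070/webhdfs/v1/tmp/node0?op=LISTSTATUS"

# If entries exist but are not readable to the Voldemort server user,
# loosen them and retry the push
hdfs dfs -chmod -R 755 /tmp/node0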