Created July 5, 2012 05:44
YARN child hangs forever
+ hadoop jar /usr/local/share/hadoop-2.0.0-cdh4.0.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.0-cdh4.0.0.jar terasort input output
12/07/04 22:26:44 INFO terasort.TeraSort: starting
12/07/04 22:26:44 INFO input.FileInputFormat: Total input paths to process : 16
Spent 167ms computing base-splits.
Spent 3ms computing TeraScheduler splits.
Computing input splits took 171ms
Sampling 10 splits of 192
12/07/04 22:26:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Making 8 from 100000 sampled records
Computing parititions took 386ms
Spent 559ms computing partitions.
12/07/04 22:26:45 INFO mapreduce.JobSubmitter: number of splits:192
12/07/04 22:26:45 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
12/07/04 22:26:45 WARN conf.Configuration: mapred.create.symlink is deprecated. Instead, use mapreduce.job.cache.symlink.create
12/07/04 22:26:45 WARN conf.Configuration: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
12/07/04 22:26:45 WARN conf.Configuration: mapreduce.partitioner.class is deprecated. Instead, use mapreduce.job.partitioner.class
12/07/04 22:26:45 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
12/07/04 22:26:45 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
12/07/04 22:26:45 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
12/07/04 22:26:45 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
12/07/04 22:26:45 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
12/07/04 22:26:45 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
12/07/04 22:26:45 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
12/07/04 22:26:45 WARN conf.Configuration: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
12/07/04 22:26:45 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
12/07/04 22:26:45 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
12/07/04 22:26:45 INFO mapred.ResourceMgrDelegate: Submitted application application_1341396283952_0010 to ResourceManager at fc5/192.168.201.206:8040
12/07/04 22:26:45 INFO mapreduce.Job: The url to track the job: http://fc5:8088/proxy/application_1341396283952_0010/
12/07/04 22:26:45 INFO mapreduce.Job: Running job: job_1341396283952_0010
12/07/04 22:26:52 INFO mapreduce.Job: Job job_1341396283952_0010 running in uber mode : false
12/07/04 22:26:52 INFO mapreduce.Job: map 0% reduce 0%
12/07/04 22:27:06 INFO mapreduce.Job: map 1% reduce 0%
12/07/04 22:27:30 INFO mapreduce.Job: map 2% reduce 0%
12/07/04 22:27:39 INFO mapreduce.Job: map 3% reduce 0%
12/07/04 22:28:04 INFO mapreduce.Job: map 4% reduce 0%
12/07/04 22:28:15 INFO mapreduce.Job: map 5% reduce 0%
12/07/04 22:28:38 INFO mapreduce.Job: map 6% reduce 0%
12/07/04 22:28:49 INFO mapreduce.Job: map 7% reduce 0%
12/07/04 22:29:09 INFO mapreduce.Job: map 8% reduce 0%
12/07/04 22:29:14 INFO mapreduce.Job: map 9% reduce 0%
12/07/04 22:29:34 INFO mapreduce.Job: map 10% reduce 0%
12/07/04 22:29:50 INFO mapreduce.Job: map 11% reduce 0%
12/07/04 22:30:06 INFO mapreduce.Job: map 12% reduce 0%
12/07/04 22:30:24 INFO mapreduce.Job: map 13% reduce 0%
12/07/04 22:30:35 INFO mapreduce.Job: map 14% reduce 0%
12/07/04 22:30:56 INFO mapreduce.Job: map 15% reduce 0%
12/07/04 22:31:06 INFO mapreduce.Job: map 16% reduce 0%
12/07/04 22:31:20 INFO mapreduce.Job: map 17% reduce 0%
12/07/04 22:31:38 INFO mapreduce.Job: map 18% reduce 0%
12/07/04 22:31:49 INFO mapreduce.Job: map 19% reduce 0%
12/07/04 22:32:10 INFO mapreduce.Job: map 20% reduce 0%
12/07/04 22:32:23 INFO mapreduce.Job: map 21% reduce 0%
12/07/04 22:32:48 INFO mapreduce.Job: map 22% reduce 0%
12/07/04 22:32:55 INFO mapreduce.Job: map 23% reduce 0%
12/07/04 22:33:20 INFO mapreduce.Job: map 24% reduce 0%
12/07/04 22:33:29 INFO mapreduce.Job: map 25% reduce 0%
12/07/04 22:33:46 INFO mapreduce.Job: map 25% reduce 1%
12/07/04 22:33:56 INFO mapreduce.Job: map 26% reduce 1%
12/07/04 22:34:23 INFO mapreduce.Job: map 27% reduce 1%
12/07/04 22:34:31 INFO mapreduce.Job: map 28% reduce 1%
12/07/04 22:35:00 INFO mapreduce.Job: map 29% reduce 1%
12/07/04 22:35:23 INFO mapreduce.Job: map 30% reduce 1%
12/07/04 22:35:35 INFO mapreduce.Job: map 31% reduce 1%
12/07/04 22:36:00 INFO mapreduce.Job: map 32% reduce 1%
12/07/04 22:36:28 INFO mapreduce.Job: map 33% reduce 1%
12/07/04 22:36:37 INFO mapreduce.Job: map 34% reduce 1%
12/07/04 22:37:04 INFO mapreduce.Job: map 35% reduce 1%
12/07/04 22:37:27 INFO mapreduce.Job: map 36% reduce 1%
12/07/04 22:37:40 INFO mapreduce.Job: map 37% reduce 1%
12/07/04 22:38:04 INFO mapreduce.Job: map 38% reduce 1%
12/07/04 22:38:12 INFO mapreduce.Job: map 39% reduce 1%
12/07/04 22:38:43 INFO mapreduce.Job: map 40% reduce 1%
12/07/04 22:39:08 INFO mapreduce.Job: map 41% reduce 1%
12/07/04 22:39:17 INFO mapreduce.Job: map 42% reduce 1%
12/07/04 22:39:42 INFO mapreduce.Job: map 43% reduce 1%
12/07/04 22:40:07 INFO mapreduce.Job: map 44% reduce 1%
12/07/04 22:40:20 INFO mapreduce.Job: map 45% reduce 1%
12/07/04 22:40:46 INFO mapreduce.Job: map 46% reduce 1%
12/07/04 22:40:49 INFO mapreduce.Job: map 46% reduce 2%
12/07/04 22:41:10 INFO mapreduce.Job: map 46% reduce 3%
12/07/04 22:41:18 INFO mapreduce.Job: map 47% reduce 3%
12/07/04 22:41:50 INFO mapreduce.Job: map 48% reduce 3%
12/07/04 22:42:16 INFO mapreduce.Job: map 48% reduce 4%
12/07/04 22:42:22 INFO mapreduce.Job: map 49% reduce 4%
12/07/04 22:42:51 INFO mapreduce.Job: map 50% reduce 4%
12/07/04 22:43:23 INFO mapreduce.Job: map 51% reduce 4%
12/07/04 22:43:55 INFO mapreduce.Job: map 52% reduce 4%
12/07/04 22:44:26 INFO mapreduce.Job: map 53% reduce 4%
12/07/04 22:44:59 INFO mapreduce.Job: map 54% reduce 4%
12/07/04 22:45:27 INFO mapreduce.Job: map 55% reduce 4%
12/07/04 22:45:59 INFO mapreduce.Job: map 56% reduce 4%
12/07/04 22:46:30 INFO mapreduce.Job: map 57% reduce 4%
12/07/04 22:47:02 INFO mapreduce.Job: map 58% reduce 4%
12/07/04 22:47:17 INFO mapreduce.Job: map 59% reduce 4%
12/07/04 22:47:50 INFO mapreduce.Job: map 60% reduce 4%
12/07/04 22:48:13 INFO mapreduce.Job: map 60% reduce 5%
12/07/04 22:48:20 INFO mapreduce.Job: map 61% reduce 5%
12/07/04 22:48:51 INFO mapreduce.Job: map 62% reduce 5%
12/07/04 22:49:22 INFO mapreduce.Job: map 63% reduce 5%
12/07/04 22:49:54 INFO mapreduce.Job: map 64% reduce 5%
12/07/04 22:50:25 INFO mapreduce.Job: map 65% reduce 5%
12/07/04 22:50:56 INFO mapreduce.Job: map 66% reduce 5%
12/07/04 22:51:25 INFO mapreduce.Job: map 67% reduce 5%
12/07/04 22:51:57 INFO mapreduce.Job: map 68% reduce 5%
12/07/04 22:52:30 INFO mapreduce.Job: map 69% reduce 5%
12/07/04 22:53:01 INFO mapreduce.Job: map 70% reduce 5%
12/07/04 22:53:32 INFO mapreduce.Job: map 71% reduce 5%
12/07/04 22:54:04 INFO mapreduce.Job: map 72% reduce 5%
12/07/04 22:54:29 INFO mapreduce.Job: map 72% reduce 6%
12/07/04 22:54:36 INFO mapreduce.Job: map 73% reduce 6%
12/07/04 22:55:07 INFO mapreduce.Job: map 74% reduce 6%
12/07/04 22:55:33 INFO mapreduce.Job: map 75% reduce 6%
12/07/04 22:56:04 INFO mapreduce.Job: map 76% reduce 6%
12/07/04 22:56:36 INFO mapreduce.Job: map 77% reduce 6%
12/07/04 22:57:09 INFO mapreduce.Job: map 78% reduce 6%
12/07/04 22:57:41 INFO mapreduce.Job: map 79% reduce 6%
12/07/04 22:58:07 INFO mapreduce.Job: map 80% reduce 6%
12/07/04 22:58:39 INFO mapreduce.Job: map 81% reduce 6%
12/07/04 22:59:11 INFO mapreduce.Job: map 82% reduce 6%
12/07/04 22:59:42 INFO mapreduce.Job: map 83% reduce 6%
12/07/04 22:59:58 INFO mapreduce.Job: map 84% reduce 6%
12/07/04 23:00:23 INFO mapreduce.Job: map 84% reduce 7%
12/07/04 23:00:29 INFO mapreduce.Job: map 85% reduce 7%
12/07/04 23:01:01 INFO mapreduce.Job: map 86% reduce 7%
12/07/04 23:01:34 INFO mapreduce.Job: map 87% reduce 7%
12/07/04 23:02:07 INFO mapreduce.Job: map 88% reduce 7%
12/07/04 23:02:38 INFO mapreduce.Job: map 89% reduce 7%
12/07/04 23:03:10 INFO mapreduce.Job: map 90% reduce 7%
12/07/04 23:03:42 INFO mapreduce.Job: map 91% reduce 7%
12/07/04 23:04:03 INFO mapreduce.Job: map 92% reduce 7%
12/07/04 23:04:25 INFO mapreduce.Job: map 93% reduce 7%
12/07/04 23:04:46 INFO mapreduce.Job: map 94% reduce 7%
12/07/04 23:05:08 INFO mapreduce.Job: map 95% reduce 7%
12/07/04 23:05:27 INFO mapreduce.Job: map 96% reduce 7%
12/07/04 23:05:40 INFO mapreduce.Job: map 96% reduce 8%
12/07/04 23:05:48 INFO mapreduce.Job: map 97% reduce 8%
12/07/04 23:06:09 INFO mapreduce.Job: map 98% reduce 8%
12/07/04 23:06:31 INFO mapreduce.Job: map 99% reduce 8%
12/07/04 23:06:52 INFO mapreduce.Job: map 100% reduce 8%
12/07/04 23:06:53 INFO mapreduce.Job: map 100% reduce 13%
12/07/04 23:06:55 INFO mapreduce.Job: map 100% reduce 17%
12/07/04 23:07:01 INFO mapreduce.Job: map 100% reduce 18%
12/07/04 23:07:08 INFO mapreduce.Job: map 100% reduce 19%
12/07/04 23:07:16 INFO mapreduce.Job: map 100% reduce 20%
12/07/04 23:07:23 INFO mapreduce.Job: map 100% reduce 21%
12/07/04 23:07:34 INFO mapreduce.Job: map 100% reduce 22%
12/07/04 23:07:41 INFO mapreduce.Job: map 100% reduce 23%
12/07/04 23:07:49 INFO mapreduce.Job: map 100% reduce 24%
12/07/04 23:08:00 INFO mapreduce.Job: map 100% reduce 25%
12/07/04 23:08:08 INFO mapreduce.Job: map 100% reduce 26%
12/07/04 23:08:15 INFO mapreduce.Job: map 100% reduce 27%
12/07/04 23:08:23 INFO mapreduce.Job: map 100% reduce 28%
12/07/04 23:08:32 INFO mapreduce.Job: map 100% reduce 29%
12/07/04 23:08:39 INFO mapreduce.Job: map 100% reduce 30%
12/07/04 23:08:50 INFO mapreduce.Job: map 100% reduce 31%
12/07/04 23:08:54 INFO mapreduce.Job: map 100% reduce 35%
12/07/04 23:08:57 INFO mapreduce.Job: map 100% reduce 36%
12/07/04 23:09:03 INFO mapreduce.Job: map 100% reduce 41%
12/07/04 23:09:09 INFO mapreduce.Job: map 100% reduce 42%
12/07/04 23:09:18 INFO mapreduce.Job: map 100% reduce 43%
12/07/04 23:09:25 INFO mapreduce.Job: map 100% reduce 44%
12/07/04 23:09:31 INFO mapreduce.Job: map 100% reduce 45%
12/07/04 23:09:40 INFO mapreduce.Job: map 100% reduce 46%
12/07/04 23:09:46 INFO mapreduce.Job: map 100% reduce 47%
12/07/04 23:09:55 INFO mapreduce.Job: map 100% reduce 48%
12/07/04 23:10:01 INFO mapreduce.Job: map 100% reduce 49%
12/07/04 23:10:07 INFO mapreduce.Job: map 100% reduce 50%
12/07/04 23:10:16 INFO mapreduce.Job: map 100% reduce 51%
12/07/04 23:10:22 INFO mapreduce.Job: map 100% reduce 52%
12/07/04 23:10:30 INFO mapreduce.Job: map 100% reduce 53%
12/07/04 23:10:37 INFO mapreduce.Job: map 100% reduce 54%
12/07/04 23:10:45 INFO mapreduce.Job: map 100% reduce 55%
12/07/04 23:10:52 INFO mapreduce.Job: map 100% reduce 56%
12/07/04 23:10:58 INFO mapreduce.Job: map 100% reduce 57%
12/07/04 23:11:06 INFO mapreduce.Job: map 100% reduce 58%
12/07/04 23:11:13 INFO mapreduce.Job: map 100% reduce 59%
12/07/04 23:11:18 INFO mapreduce.Job: map 100% reduce 55%
12/07/04 23:11:25 INFO mapreduce.Job: map 100% reduce 56%
12/07/04 23:11:33 INFO mapreduce.Job: map 100% reduce 57%
12/07/04 23:11:41 INFO mapreduce.Job: map 100% reduce 58%
12/07/04 23:11:50 INFO mapreduce.Job: map 100% reduce 59%
12/07/04 23:11:51 INFO mapreduce.Job: map 100% reduce 54%
12/07/04 23:11:52 INFO mapreduce.Job: map 100% reduce 55%
12/07/04 23:11:59 INFO mapreduce.Job: map 100% reduce 56%
12/07/04 23:12:04 INFO mapreduce.Job: map 100% reduce 57%
12/07/04 23:12:15 INFO mapreduce.Job: map 100% reduce 58%
12/07/04 23:12:21 INFO mapreduce.Job: map 100% reduce 59%
12/07/04 23:12:32 INFO mapreduce.Job: map 100% reduce 60%
12/07/04 23:12:40 INFO mapreduce.Job: map 100% reduce 61%
12/07/04 23:12:46 INFO mapreduce.Job: map 100% reduce 62%
12/07/04 23:12:54 INFO mapreduce.Job: map 100% reduce 58%
12/07/04 23:13:01 INFO mapreduce.Job: map 100% reduce 54%
12/07/04 23:13:07 INFO mapreduce.Job: map 100% reduce 55%
12/07/04 23:13:13 INFO mapreduce.Job: map 100% reduce 56%
12/07/04 23:13:19 INFO mapreduce.Job: map 100% reduce 57%
12/07/04 23:13:38 INFO mapreduce.Job: map 100% reduce 58%
12/07/04 23:13:44 INFO mapreduce.Job: map 100% reduce 59%
12/07/04 23:13:49 INFO mapreduce.Job: Task Id : attempt_1341396283952_0010_m_000020_0, Status : FAILED
Container killed by the ApplicationMaster.
Too Many fetch failures.Failing the attempt
12/07/04 23:13:49 WARN mapreduce.Job: Error reading task output Server returned HTTP response code: 400 for URL: http://fc3:8080/tasklog?plaintext=true&attemptid=attempt_1341396283952_0010_m_000020_0&filter=stdout
12/07/04 23:13:49 WARN mapreduce.Job: Error reading task output Server returned HTTP response code: 400 for URL: http://fc3:8080/tasklog?plaintext=true&attemptid=attempt_1341396283952_0010_m_000020_0&filter=stderr
12/07/04 23:13:50 INFO mapreduce.Job: map 99% reduce 59%
12/07/04 23:13:51 INFO mapreduce.Job: map 99% reduce 58%
12/07/04 23:14:04 INFO mapreduce.Job: map 99% reduce 59%
12/07/04 23:14:08 INFO mapreduce.Job: map 99% reduce 60%
12/07/04 23:14:25 INFO mapreduce.Job: map 99% reduce 61%
12/07/04 23:14:27 INFO mapreduce.Job: map 100% reduce 61%
12/07/04 23:14:37 INFO mapreduce.Job: map 100% reduce 62%
12/07/04 23:14:38 INFO mapreduce.Job: map 100% reduce 66%
12/07/04 23:14:45 INFO mapreduce.Job: map 100% reduce 67%
12/07/04 23:14:56 INFO mapreduce.Job: map 100% reduce 68%
12/07/04 23:15:05 INFO mapreduce.Job: map 100% reduce 69%
12/07/04 23:15:20 INFO mapreduce.Job: map 100% reduce 70%
12/07/04 23:15:23 INFO mapreduce.Job: map 100% reduce 74%
12/07/04 23:15:26 INFO mapreduce.Job: map 100% reduce 75%
12/07/04 23:15:34 INFO mapreduce.Job: map 100% reduce 76%
12/07/04 23:15:47 INFO mapreduce.Job: map 100% reduce 77%
12/07/04 23:15:56 INFO mapreduce.Job: map 100% reduce 78%
12/07/04 23:16:09 INFO mapreduce.Job: map 100% reduce 79%
12/07/04 23:16:12 INFO mapreduce.Job: map 100% reduce 83%
12/07/04 23:16:16 INFO mapreduce.Job: map 100% reduce 84%
12/07/04 23:16:27 INFO mapreduce.Job: map 100% reduce 85%
12/07/04 23:16:37 INFO mapreduce.Job: map 100% reduce 86%
12/07/04 23:16:46 INFO mapreduce.Job: map 100% reduce 87%
12/07/04 23:16:58 INFO mapreduce.Job: map 100% reduce 88%
12/07/04 23:17:07 INFO mapreduce.Job: map 100% reduce 89%
12/07/04 23:17:18 INFO mapreduce.Job: map 100% reduce 90%
12/07/04 23:17:28 INFO mapreduce.Job: map 100% reduce 91%
12/07/04 23:17:37 INFO mapreduce.Job: map 100% reduce 92%
12/07/04 23:17:49 INFO mapreduce.Job: map 100% reduce 93%
12/07/04 23:18:00 INFO mapreduce.Job: map 100% reduce 94%
12/07/04 23:18:09 INFO mapreduce.Job: map 100% reduce 95%
12/07/04 23:18:19 INFO mapreduce.Job: map 100% reduce 96%
12/07/04 23:18:27 INFO mapreduce.Job: map 100% reduce 92%
12/07/04 23:18:32 INFO mapreduce.Job: map 100% reduce 93%
12/07/04 23:18:51 INFO mapreduce.Job: map 100% reduce 94%
12/07/04 23:19:07 INFO mapreduce.Job: map 100% reduce 95%
12/07/04 23:19:23 INFO mapreduce.Job: map 100% reduce 96%
12/07/04 23:19:41 INFO mapreduce.Job: map 100% reduce 97%
12/07/04 23:19:58 INFO mapreduce.Job: map 100% reduce 98%
12/07/04 23:20:02 INFO mapreduce.Job: map 100% reduce 94%
12/07/04 23:20:26 INFO mapreduce.Job: map 100% reduce 95%
12/07/04 23:20:42 INFO mapreduce.Job: map 100% reduce 91%
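The client log above shows the signature of this hang: reduce progress repeatedly climbs and then falls back (59% → 55%, 62% → 58% → 54%, 96% → 92%, ...) while the job never completes, because reduce attempts are killed after fetch failures and restarted. A minimal sketch (not part of the gist; the sample lines and the `count_regressions` name are illustrative) of scanning such a client log for reduce-progress regressions:

```shell
#!/bin/sh
# Count how many times the reported "reduce N%" value goes backwards in a
# MapReduce client log. A nonzero count suggests reduce attempts are being
# killed and re-run, e.g. after "Too Many fetch failures".
count_regressions() {
  awk '
    /reduce [0-9]+%/ {
      # Find the percentage that follows the word "reduce" on this line.
      for (i = 1; i <= NF; i++) if ($i == "reduce") { pct = $(i + 1); sub(/%/, "", pct) }
      if (pct < prev) n++            # progress went backwards: a regression
      prev = pct
    }
    END { print n + 0 }'
}

# Three sample lines copied from the log above: 59% -> 55% is one regression.
sample='12/07/04 23:11:13 INFO mapreduce.Job: map 100% reduce 59%
12/07/04 23:11:18 INFO mapreduce.Job: map 100% reduce 55%
12/07/04 23:11:25 INFO mapreduce.Job: map 100% reduce 56%'

regressions=$(printf '%s\n' "$sample" | count_regressions)
echo "$regressions"
```

Against a full client log (`count_regressions < job.log`), any nonzero count is a hint to look at the NodeManager side next, as the ShuffleHandler and ResourceManager logs below do.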
2012-07-05 14:22:13,369 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: for url=8080/mapOutput?job=job_1341396283952_0010&reduce=4&map=attempt_1341396283952_0010_m_000097_0,attempt_1341396283952_0010_m_000048_0,attempt_1341396283952_0010_m_000042_0,attempt_1341396283952_0010_m_000020_1,attempt_1341396283952_0010_m_000035_0,attempt_1341396283952_0010_m_000023_0,attempt_1341396283952_0010_m_000038_0,attempt_1341396283952_0010_m_000107_0,attempt_1341396283952_0010_m_000063_0,attempt_1341396283952_0010_m_000067_0,attempt_1341396283952_0010_m_000050_0 sent hash and receievd reply
2012-07-05 14:22:13,370 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#2 - MergerManager returned Status.WAIT ...
2012-07-05 14:22:13,370 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: fc3:8080 freed by fetcher#2 in 2s
2012-07-05 14:22:13,370 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Assiging fc3:8080 with 11 to fetcher#2
2012-07-05 14:22:13,370 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: assigned 11 of 11 to fc3:8080 to fetcher#2
2012-07-05 14:22:13,372 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: for url=8080/mapOutput?job=job_1341396283952_0010&reduce=4&map=attempt_1341396283952_0010_m_000097_0,attempt_1341396283952_0010_m_000042_0,attempt_1341396283952_0010_m_000048_0,attempt_1341396283952_0010_m_000020_1,attempt_1341396283952_0010_m_000023_0,attempt_1341396283952_0010_m_000035_0,attempt_1341396283952_0010_m_000038_0,attempt_1341396283952_0010_m_000107_0,attempt_1341396283952_0010_m_000063_0,attempt_1341396283952_0010_m_000050_0,attempt_1341396283952_0010_m_000067_0 sent hash and receievd reply
2012-07-05 14:22:13,382 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#2 - MergerManager returned Status.WAIT ...
2012-07-05 14:22:13,382 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: fc3:8080 freed by fetcher#2 in 12s
2012-07-05 14:22:13,382 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Assiging fc3:8080 with 11 to fetcher#2
2012-07-05 14:22:13,383 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: assigned 11 of 11 to fc3:8080 to fetcher#2
2012-07-05 14:22:13,384 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: for url=8080/mapOutput?job=job_1341396283952_0010&reduce=4&map=attempt_1341396283952_0010_m_000097_0,attempt_1341396283952_0010_m_000048_0,attempt_1341396283952_0010_m_000042_0,attempt_1341396283952_0010_m_000020_1,attempt_1341396283952_0010_m_000035_0,attempt_1341396283952_0010_m_000023_0,attempt_1341396283952_0010_m_000038_0,attempt_1341396283952_0010_m_000107_0,attempt_1341396283952_0010_m_000063_0,attempt_1341396283952_0010_m_000067_0,attempt_1341396283952_0010_m_000050_0 sent hash and receievd reply
2012-07-05 14:22:13,392 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: fetcher#2 - MergerManager returned Status.WAIT ...
2012-07-05 14:22:13,393 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: fc3:8080 freed by fetcher#2 in 10s
2012-07-05 14:22:13,393 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Assiging fc3:8080 with 11 to fetcher#2
2012-07-05 14:22:13,393 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: assigned 11 of 11 to fc3:8080 to fetcher#2
2012-07-05 14:22:13,396 INFO [fetcher#2] org.apache.hadoop.mapreduce.task.reduce.Fetcher: for url=8080/mapOutput?job=job_1341396283952_0010&reduce=4&map=attempt_1341396283952_0010_m_000097_0,attempt_1341396283952_0010_m_000042_0,attempt_1341396283952_0010_m_000048_0,attempt_1341396283952_0010_m_000020_1,attempt_1341396283952_0010_m_000023_0,attempt_1341396283952_0010_m_000035_0,attempt_1341396283952_0010_m_000038_0,attempt_1341396283952_0010_m_000107_0,attempt_1341396283952_0010_m_000063_0,attempt_1341396283952_0010_m_000050_0,attempt_1341396283952_0010_m_000067_0 sent hash and receievd reply
2012-07-05 14:40:04,918 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error [id: 0x37ee5b0f, /192.168.201.202:54030 => /192.168.201.204:8080] EXCEPTION: java.io.IOException: Connection reset by peer
2012-07-05 14:40:04,918 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error:
java.nio.channels.ClosedChannelException
	at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:617)
	at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:593)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:356)
	at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
	at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2012-07-05 14:40:04,919 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error:
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcher.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:202)
	at sun.nio.ch.IOUtil.read(IOUtil.java:169)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:322)
	at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
	at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2012-07-05 14:40:04,919 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error [id: 0x3dea86c2, /192.168.201.202:54031 => /192.168.201.204:8080] EXCEPTION: java.io.IOException: Connection reset by peer
2012-07-05 14:40:04,919 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error:
java.nio.channels.ClosedChannelException
	at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:617)
	at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:593)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:356)
	at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
	at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2012-07-05 14:40:04,921 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error:
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcher.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:202)
	at sun.nio.ch.IOUtil.read(IOUtil.java:169)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:322)
	at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
	at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2012-07-05 14:40:04,921 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error [id: 0x3aa4f0ff, /192.168.201.202:54023 => /192.168.201.204:8080] EXCEPTION: java.io.IOException: Connection reset by peer
2012-07-05 14:40:04,921 ERROR org.apache.hadoop.mapred.ShuffleHandler: Shuffle error:
java.nio.channels.ClosedChannelException
	at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:617)
	at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:593)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:356)
	at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
	at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2012-07-04 23:13:51,789 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp: Application application_1341396283952_0010 unreserved on node host: fc3:58886 #containers=1 available=6144 used=2048, currently has 0 at priority org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@24; currentReservation memory: 0
2012-07-04 23:13:51,789 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000203 Container Transitioned from NEW to ALLOCATED
2012-07-04 23:13:51,789 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=marblejenka OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1341396283952_0010 CONTAINERID=container_1341396283952_0010_01_000203
2012-07-04 23:13:51,789 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1341396283952_0010_01_000203 of capacity memory: 4096 on host fc3:58886, which currently has 2 containers, memory: 6144 used and memory: 2048 available
2012-07-04 23:13:51,790 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1341396283952_0010 container=Container: [ContainerId: container_1341396283952_0010_01_000203, NodeId: fc3:58886, NodeHttpAddress: fc3:8042, Resource: memory: 4096, Priority: org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@24, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 10, cluster_timestamp: 1341396283952, }, attemptId: 1, }, id: 203, }, state: C_NEW, ] containerId=container_1341396283952_0010_01_000203 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=18432MB, usedCapacity=0.75, absoluteUsedCapacity=0.75, numApps=1, numContainers=5 usedCapacity=0.75 absoluteUsedCapacity=0.75 used=memory: 18432 cluster=memory: 24576
2012-07-04 23:13:52,781 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000203 Container Transitioned from ALLOCATED to ACQUIRED
2012-07-04 23:13:53,794 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000203 Container Transitioned from ACQUIRED to RUNNING
2012-07-04 23:13:54,397 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000204 Container Transitioned from NEW to ALLOCATED
2012-07-04 23:13:54,397 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=marblejenka OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1341396283952_0010 CONTAINERID=container_1341396283952_0010_01_000204
2012-07-04 23:13:54,397 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1341396283952_0010_01_000204 of capacity memory: 4096 on host fc2:47026, which currently has 2 containers, memory: 8192 used and memory: 0 available
2012-07-04 23:13:54,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application=application_1341396283952_0010 container=Container: [ContainerId: container_1341396283952_0010_01_000204, NodeId: fc2:47026, NodeHttpAddress: fc2:8042, Resource: memory: 4096, Priority: org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@29, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 10, cluster_timestamp: 1341396283952, }, attemptId: 1, }, id: 204, }, state: C_NEW, ] containerId=container_1341396283952_0010_01_000204 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=18432MB, usedCapacity=0.75, absoluteUsedCapacity=0.75, numApps=1, numContainers=5 usedCapacity=0.75 absoluteUsedCapacity=0.75 used=memory: 18432 cluster=memory: 24576
2012-07-04 23:13:54,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting queues since queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=22528MB, usedCapacity=0.9166667, absoluteUsedCapacity=0.9166667, numApps=1, numContainers=6
2012-07-04 23:13:54,398 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.9166667 absoluteUsedCapacity=0.9166667 used=memory: 22528 cluster=memory: 24576
2012-07-04 23:13:54,800 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000204 Container Transitioned from ALLOCATED to ACQUIRED
2012-07-04 23:13:55,402 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000204 Container Transitioned from ACQUIRED to RUNNING
2012-07-04 23:14:27,484 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000203 Container Transitioned from RUNNING to COMPLETED
2012-07-04 23:14:27,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp: Completed container: container_1341396283952_0010_01_000203 in state: COMPLETED event:FINISHED
2012-07-04 23:14:27,484 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=marblejenka OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1341396283952_0010 CONTAINERID=container_1341396283952_0010_01_000203
2012-07-04 23:14:27,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1341396283952_0010_01_000203 of capacity memory: 4096 on host fc3:58886, which currently has 1 containers, memory: 2048 used and memory: 6144 available, release resources=true
2012-07-04 23:14:27,484 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=memory: 18432 numContainers=5 user=marblejenka user-resources=memory: 18432
2012-07-04 23:14:27,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1341396283952_0010_01_000203, NodeId: fc3:58886, NodeHttpAddress: fc3:8042, Resource: memory: 4096, Priority: org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@24, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 10, cluster_timestamp: 1341396283952, }, attemptId: 1, }, id: 203, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143, ] resource=memory: 4096 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=18432MB, usedCapacity=0.75, absoluteUsedCapacity=0.75, numApps=1, numContainers=5 usedCapacity=0.75 absoluteUsedCapacity=0.75 used=memory: 18432 cluster=memory: 24576
2012-07-04 23:14:27,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.75 absoluteUsedCapacity=0.75 used=memory: 18432 cluster=memory: 24576
2012-07-04 23:14:27,485 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application appattempt_1341396283952_0010_000001 released container container_1341396283952_0010_01_000203 on node: host: fc3:58886 #containers=1 available=6144 used=2048 with event: FINISHED
2012-07-04 23:18:28,297 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000201 Container Transitioned from RUNNING to COMPLETED | |
2012-07-04 23:18:28,297 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp: Completed container: container_1341396283952_0010_01_000201 in state: COMPLETED event:FINISHED | |
2012-07-04 23:18:28,297 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=marblejenka OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1341396283952_0010 CONTAINERID=container_1341396283952_0010_01_000201 | |
2012-07-04 23:18:28,297 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1341396283952_0010_01_000201 of capacity memory: 4096 on host fc1:35615, which currently has 1 containers, memory: 4096 used and memory: 4096 available, release resources=true | |
2012-07-04 23:18:28,298 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=memory: 14336 numContainers=4 user=marblejenka user-resources=memory: 14336 | |
2012-07-04 23:18:28,298 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1341396283952_0010_01_000201, NodeId: fc1:35615, NodeHttpAddress: fc1:8042, Resource: memory: 4096, Priority: org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@29, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 10, cluster_timestamp: 1341396283952, }, attemptId: 1, }, id: 201, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143, ] resource=memory: 4096 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=14336MB, usedCapacity=0.5833333, absoluteUsedCapacity=0.5833333, numApps=1, numContainers=4 usedCapacity=0.5833333 absoluteUsedCapacity=0.5833333 used=memory: 14336 cluster=memory: 24576 | |
2012-07-04 23:18:28,299 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.5833333 absoluteUsedCapacity=0.5833333 used=memory: 14336 cluster=memory: 24576 | |
2012-07-04 23:18:28,299 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application appattempt_1341396283952_0010_000001 released container container_1341396283952_0010_01_000201 on node: host: fc1:35615 #containers=1 available=4096 used=4096 with event: FINISHED | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000202 Container Transitioned from RUNNING to COMPLETED | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp: Completed container: container_1341396283952_0010_01_000202 in state: COMPLETED event:FINISHED | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=marblejenka OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1341396283952_0010 CONTAINERID=container_1341396283952_0010_01_000202 | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1341396283952_0010_01_000202 of capacity memory: 4096 on host fc2:47026, which currently has 1 containers, memory: 4096 used and memory: 4096 available, release resources=true | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=memory: 10240 numContainers=3 user=marblejenka user-resources=memory: 10240 | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1341396283952_0010_01_000202, NodeId: fc2:47026, NodeHttpAddress: fc2:8042, Resource: memory: 4096, Priority: org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@29, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 10, cluster_timestamp: 1341396283952, }, attemptId: 1, }, id: 202, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143, ] resource=memory: 4096 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=10240MB, usedCapacity=0.41666666, absoluteUsedCapacity=0.41666666, numApps=1, numContainers=3 usedCapacity=0.41666666 absoluteUsedCapacity=0.41666666 used=memory: 10240 cluster=memory: 24576 | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.41666666 absoluteUsedCapacity=0.41666666 used=memory: 10240 cluster=memory: 24576 | |
2012-07-04 23:20:03,221 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application appattempt_1341396283952_0010_000001 released container container_1341396283952_0010_01_000202 on node: host: fc2:47026 #containers=1 available=4096 used=4096 with event: FINISHED | |
2012-07-04 23:20:43,266 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1341396283952_0010_01_000204 Container Transitioned from RUNNING to COMPLETED | |
2012-07-04 23:20:43,266 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp: Completed container: container_1341396283952_0010_01_000204 in state: COMPLETED event:FINISHED | |
2012-07-04 23:20:43,266 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=marblejenka OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1341396283952_0010 CONTAINERID=container_1341396283952_0010_01_000204 | |
2012-07-04 23:20:43,266 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1341396283952_0010_01_000204 of capacity memory: 4096 on host fc2:47026, which currently has 0 containers, memory: 0 used and memory: 8192 available, release resources=true | |
2012-07-04 23:20:43,266 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=memory: 6144 numContainers=2 user=marblejenka user-resources=memory: 6144 | |
2012-07-04 23:20:43,267 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1341396283952_0010_01_000204, NodeId: fc2:47026, NodeHttpAddress: fc2:8042, Resource: memory: 4096, Priority: org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@29, State: NEW, Token: null, Status: container_id {, app_attempt_id {, application_id {, id: 10, cluster_timestamp: 1341396283952, }, attemptId: 1, }, id: 204, }, state: C_COMPLETE, diagnostics: "Container killed by the ApplicationMaster.\n\n", exit_status: 143, ] resource=memory: 4096 queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=6144MB, usedCapacity=0.25, absoluteUsedCapacity=0.25, numApps=1, numContainers=2 usedCapacity=0.25 absoluteUsedCapacity=0.25 used=memory: 6144 cluster=memory: 24576 | |
2012-07-04 23:20:43,267 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.25 absoluteUsedCapacity=0.25 used=memory: 6144 cluster=memory: 24576 | |
2012-07-04 23:20:43,268 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application appattempt_1341396283952_0010_000001 released container container_1341396283952_0010_01_000204 on node: host: fc2:47026 #containers=0 available=8192 used=0 with event: FINISHED |