OOM #1
[2013-02-27 17:53:43,619][INFO ][test ] ==> Test Success [integration.recovery.RelocationTests#testPrimaryRelocationWhileBulkIndexingWith10RelocationAnd1Writer]
[2013-02-27 17:53:43,619][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: stopping ...
[2013-02-27 17:53:43,659][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: stopped
[2013-02-27 17:53:43,659][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: closing ...
[2013-02-27 17:53:43,659][INFO ][cluster.service ] [node2] master {new [node2][190][local[226]]{local=true}, previous [node1][189][local[225]]{local=true}}, removed {[node1][189][local[225]]{local=true},}, reason: local-disco-update
[2013-02-27 17:53:43,661][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: closed
[2013-02-27 17:53:43,661][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: stopping ...
[2013-02-27 17:53:43,673][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: stopped
[2013-02-27 17:53:43,674][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: closing ...
[2013-02-27 17:53:43,675][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: closed
[2013-02-27 17:53:43,676][INFO ][test ] ==> Test Starting [integration.recovery.RelocationTests#testPrimaryRelocationWhileBulkIndexingWith10RelocationAnd5Writers]
[2013-02-27 17:53:43,676][INFO ][test.integration.recovery] --> starting [node1] ...
[2013-02-27 17:53:43,678][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: initializing ...
[2013-02-27 17:53:43,678][INFO ][plugins ] [node1] loaded [], sites []
[2013-02-27 17:53:43,758][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: initialized
[2013-02-27 17:53:43,758][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: starting ...
[2013-02-27 17:53:43,759][INFO ][transport ] [node1] bound_address {local[227]}, publish_address {local[227]}
[2013-02-27 17:53:43,759][INFO ][cluster.service ] [node1] new_master [node1][191][local[227]]{local=true}, reason: local-disco-initial_connect(master)
[2013-02-27 17:53:43,760][INFO ][discovery ] [node1] test-cluster-monster/191
[2013-02-27 17:53:43,760][INFO ][gateway ] [node1] recovered [0] indices into cluster_state
[2013-02-27 17:53:43,764][INFO ][http ] [node1] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.4:9200]}
[2013-02-27 17:53:43,764][INFO ][node ] [node1] {0.21.0.Beta1-SNAPSHOT}[30362]: started
[2013-02-27 17:53:43,764][INFO ][test.integration.recovery] --> creating test index ...
[2013-02-27 17:53:43,777][INFO ][cluster.metadata ] [node1] [test] creating index, cause [api], shards [1]/[0], mappings []
[2013-02-27 17:53:43,784][INFO ][test.integration.recovery] --> starting [node2] ...
[2013-02-27 17:53:43,785][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: initializing ...
[2013-02-27 17:53:43,786][INFO ][plugins ] [node2] loaded [], sites []
[2013-02-27 17:53:43,861][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: initialized
[2013-02-27 17:53:43,861][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: starting ...
[2013-02-27 17:53:43,862][INFO ][transport ] [node2] bound_address {local[228]}, publish_address {local[228]}
[2013-02-27 17:53:43,862][INFO ][cluster.service ] [node1] added {[node2][192][local[228]]{local=true},}, reason: local-disco-receive(from node[[node2][192][local[228]]{local=true}])
[2013-02-27 17:53:43,863][INFO ][cluster.service ] [node2] detected_master [node1][191][local[227]]{local=true}, added {[node1][191][local[227]]{local=true},}, reason: local-disco-receive(from master)
[2013-02-27 17:53:43,863][INFO ][discovery ] [node2] test-cluster-monster/192
[2013-02-27 17:53:43,877][INFO ][http ] [node2] bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address {inet[/192.168.1.4:9201]}
[2013-02-27 17:53:43,877][INFO ][node ] [node2] {0.21.0.Beta1-SNAPSHOT}[30362]: started
[2013-02-27 17:53:43,877][INFO ][test.integration.recovery] --> starting 5 indexing threads
[2013-02-27 17:53:43,877][INFO ][test.integration.recovery] **** starting indexing thread 0
[2013-02-27 17:53:43,878][INFO ][test.integration.recovery] **** starting indexing thread 1
[2013-02-27 17:53:43,878][INFO ][test.integration.recovery] **** starting indexing thread 2
[2013-02-27 17:53:43,878][INFO ][test.integration.recovery] **** starting indexing thread 3
[2013-02-27 17:53:43,878][INFO ][test.integration.recovery] --> waiting for 2000 docs to be indexed ...
[2013-02-27 17:53:43,879][INFO ][test.integration.recovery] **** starting indexing thread 4
[2013-02-27 17:53:43,890][INFO ][cluster.metadata ] [node1] [test] update_mapping [type1] (dynamic)
[2013-02-27 17:53:44,003][INFO ][test.integration.recovery] --> 2000 docs indexed
[2013-02-27 17:53:44,004][INFO ][test.integration.recovery] --> starting relocations...
[2013-02-27 17:53:44,004][INFO ][test.integration.recovery] --> START relocate the shard from node1 to node2
[2013-02-27 17:53:46,406][INFO ][test.integration.recovery] --> DONE relocate the shard from node1 to node2
[2013-02-27 17:53:46,407][INFO ][test.integration.recovery] --> START relocate the shard from node2 to node1
[2013-02-27 17:53:51,735][INFO ][test.integration.recovery] --> DONE relocate the shard from node2 to node1
[2013-02-27 17:53:51,735][INFO ][test.integration.recovery] --> START relocate the shard from node1 to node2
[2013-02-27 17:54:00,656][INFO ][test.integration.recovery] --> DONE relocate the shard from node1 to node2
[2013-02-27 17:54:00,656][INFO ][test.integration.recovery] --> START relocate the shard from node2 to node1
[2013-02-27 17:54:18,055][INFO ][test.integration.recovery] --> DONE relocate the shard from node2 to node1
[2013-02-27 17:54:18,055][INFO ][test.integration.recovery] --> START relocate the shard from node1 to node2
[2013-02-27 17:54:25,863][INFO ][test.integration.recovery] --> DONE relocate the shard from node1 to node2
[2013-02-27 17:54:25,863][INFO ][test.integration.recovery] --> START relocate the shard from node2 to node1
[2013-02-27 17:54:37,397][INFO ][test.integration.recovery] --> DONE relocate the shard from node2 to node1
[2013-02-27 17:54:37,397][INFO ][test.integration.recovery] --> START relocate the shard from node1 to node2
[2013-02-27 17:54:48,948][INFO ][test.integration.recovery] --> DONE relocate the shard from node1 to node2
[2013-02-27 17:54:48,948][INFO ][test.integration.recovery] --> START relocate the shard from node2 to node1
[2013-02-27 17:54:59,851][INFO ][test.integration.recovery] --> DONE relocate the shard from node2 to node1
[2013-02-27 17:54:59,851][INFO ][test.integration.recovery] --> START relocate the shard from node1 to node2
[2013-02-27 17:55:10,865][INFO ][test.integration.recovery] --> DONE relocate the shard from node1 to node2
[2013-02-27 17:55:10,866][INFO ][test.integration.recovery] --> START relocate the shard from node2 to node1
[2013-02-27 17:55:23,927][INFO ][test.integration.recovery] --> DONE relocate the shard from node2 to node1
[2013-02-27 17:55:23,927][INFO ][test.integration.recovery] --> done relocations
[2013-02-27 17:55:23,927][INFO ][test.integration.recovery] --> marking and waiting for indexing threads to stop ...
[2013-02-27 17:55:24,303][INFO ][test.integration.recovery] **** done indexing thread 4
[2013-02-27 17:55:24,305][INFO ][test.integration.recovery] **** done indexing thread 2
[2013-02-27 17:55:24,305][INFO ][test.integration.recovery] **** done indexing thread 1
[2013-02-27 17:55:24,306][INFO ][test.integration.recovery] **** done indexing thread 0
[2013-02-27 17:55:24,306][INFO ][test.integration.recovery] **** done indexing thread 3
[2013-02-27 17:55:24,307][INFO ][test.integration.recovery] --> indexing threads stopped
[2013-02-27 17:55:24,307][INFO ][test.integration.recovery] --> refreshing the index
[2013-02-27 17:55:24,345][INFO ][test.integration.recovery] --> searching the index
[2013-02-27 17:55:24,345][INFO ][test.integration.recovery] --> START search test round 1
Exception in thread "elasticsearch[node1][search][T#1]" java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.search.internal.InternalSearchHit.<init>(InternalSearchHit.java:95)
at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:162)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:326)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:243)
at org.elasticsearch.action.search.type.TransportSearchQueryAndFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryAndFetchAction.java:75)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:205)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:192)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "elasticsearch[node1][generic][T#3]" java.lang.OutOfMemoryError: Java heap space
at com.google.common.collect.Iterators.forArray(Iterators.java:1155)
at com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:96)
at com.google.common.collect.RegularImmutableAsList.listIterator(RegularImmutableAsList.java:54)
at com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:334)
at com.google.common.collect.ImmutableList.iterator(ImmutableList.java:330)
at com.google.common.collect.RegularImmutableMap$EntrySet.iterator(RegularImmutableMap.java:218)
at com.google.common.collect.ImmutableMapValues.iterator(ImmutableMapValues.java:44)
at org.elasticsearch.cluster.node.DiscoveryNodes.iterator(DiscoveryNodes.java:71)
at org.elasticsearch.cluster.node.DiscoveryNodes.iterator(DiscoveryNodes.java:47)
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:391)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "elasticsearch[node1][[ttl_expire]]" java.lang.OutOfMemoryError: Java heap space