Slim Bouguerra b-slim

@b-slim
b-slim / JM_oom.log
Created November 19, 2020 20:53
Job manager runs out of memory when taking a checkpoint
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236) ~[?:1.8.0_172]
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118) ~[?:1.8.0_172]
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93) ~[?:1.8.0_172]
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153) ~[?:1.8.0_172]
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877) ~[?:1.8.0_172]
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786) ~[?:1.8.0_172]
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189) ~[?:1.8.0_172]
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348) ~[?:1.8.0_172]
at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply$mcV$sp(Serializer.scala:324) ~[flink-dist_2.11-1.11.1.jar:1.11.1]
@b-slim
b-slim / TM_error_stack.log
Last active November 19, 2020 20:54
Logs from the TM: akka frame size out of bounds (exceeds the 100 MB maximum)
2020-11-18 20:40:22,951 WARN org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler [] - Could not create remote rpc invocation message. Failing rpc invocation because...
java.io.IOException: The rpc invocation size 199964727 exceeds the maximum akka framesize.
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.createRpcInvocationMessage(AkkaInvocationHandler.java:276) [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invokeRpc(AkkaInvocationHandler.java:205) [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invoke(AkkaInvocationHandler.java:134) [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.runtime.rpc.akka.FencedAkkaI
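Both gists point at oversized RPC messages travelling between the TaskManager and the JobManager through Akka: the serialized payload blows past the JobManager heap in the first stack and past the configured akka frame size (roughly 100 MB, per the gist title) in the second. A minimal mitigation sketch for flink-conf.yaml, assuming Flink 1.11 and purely illustrative values; shrinking what is shipped over RPC is usually the real fix:

```sh
# append to conf/flink-conf.yaml and restart the cluster (values are illustrative)
cat >> conf/flink-conf.yaml <<'EOF'
# allow RPC payloads larger than the limit that rejected the 199964727-byte invocation
akka.framesize: 209715200b
# give the JobManager more heap so the serialized checkpoint metadata fits
jobmanager.memory.heap.size: 2g
EOF
```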
#!/usr/bin/env awk -f
# Based on the idea from https://blogs.oracle.com/taylor22/entry/using_r_to_analyze_g1gc; the
# script is updated to match the format of GC logs produced with the following JVM parameters:
# -XX:+UseThreadPriorities
# -XX:ThreadPriorityPolicy=42
# -Xms1995M -Xmx1995M
# -Xss256k -XX:StringTableSize=1000003
# -XX:SurvivorRatio=8
# -XX:MaxTenuringThreshold=1
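The preview stops at the JVM flags, before the awk body. A hedged usage sketch for a script like this (the file names g1gc_parse.awk, gc.log and parsed_gc.txt are assumptions, not part of the gist):

```sh
# run the parser over a GC log produced with the flags above; output file name is arbitrary
awk -f g1gc_parse.awk gc.log > parsed_gc.txt
```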
public class TestCacheAllocationsEvictionsCycles {
private final long maxSize = 1024;
private BuddyAllocator allocator;
private MemoryManager memoryManager;
private LowLevelCachePolicy cachePolicy;
private LlapDaemonCacheMetrics cacheMetrics = LlapDaemonCacheMetrics.create("testCache", "testSession");
private Logger LOG = LoggerFactory.getLogger(TestCacheAllocationsEvictionsCycles.class);
private EvictionTracker evictionTracker;
private LowLevelCache dataCache = Mockito.mock(LowLevelCache.class);
@b-slim
b-slim / Training_support
Created October 24, 2017 12:32
support tutorial
# Node status
via curl
```sh
curl localhost:8082/status
{"version":"0.10.1.2.6.3.0-220","modules":[{"name":"io.druid.query.aggregation.datasketches.theta.SketchModule","artifact":"druid-datasketches","version":"0.10.1.2.6.3.0-220"},{"name":"io.druid.query.aggregation.datasketches.theta.oldapi.OldApiSketchModule","artifact":"druid-datasketches","version":"0.10.1.2.6.3.0-220"},{"name":"io.druid.storage.hdfs.HdfsStorageDruidModule","artifact":"druid-hdfs-storage","version":"0.10.1.2.6.3.0-220"},{"name":"io.druid.indexing.kafka.KafkaIndexTaskModule","artifact":"druid-kafka-indexing-service","version":"0.10.1.2.6.3.0-220"},{"name":"io.druid.metadata.storage.mysql.MySQLMetadataStorageModule","artifact":"mysql-metadata-storage","version":"0.10.1.2.6.3.0-220"},{"name":"io.druid.emitter.ambari.metrics.AmbariMetricsEmitterModule","artifact":"ambari-metrics-emitter","version":"0.10.1.2.6.3.0-220"}],"memory":{"maxMemory":2058354688,"totalMemory":2058354688,"freeMemory":1727201736,"usedMemory":331152952}}
```
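The status endpoint returns the whole document on one line; piping it through a JSON formatter makes it easier to scan (assumes a Python interpreter is available on the node):

```sh
curl -s localhost:8082/status | python -m json.tool
```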
# Setup d
@b-slim
b-slim / extra_properties.json
Last active January 26, 2017 19:31
Extra properties for druid bi cluster
[
{
"druid-common": {
"properties": {
"druid.metadata.storage.type":"postgresql",
"druid.metadata.storage.connector.connectURI": "jdbc:postgresql://YOUR_RDS.us-west-2.rds.amazonaws.com:5432/YOUR_DBNAME",
"druid.metadata.storage.connector.user": "YOUR_USER",
"druid.metadata.storage.connector.password": "YOUR_DB_PASSWORD",
"druid.extensions.loadList": "[\"postgresql-metadata-storage\", \"druid-s3-extensions\"]"
}
@b-slim
b-slim / wikipediaJobSpecTemplate.json
Created January 26, 2017 00:17
druid indexing job spec file
{
"type" : "index_hadoop",
"spec" : {
"dataSchema" : {
"dataSource" : "wikipedia",
"parser" : {
"type" : "hadoopyString",
"parseSpec" : {
"format" : "json",
"timestampSpec" : {
mvn test -Dtest=TestCliDriver -Dqfile=explainanalyze_4.q
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Hive Integration - QFile Tests 2.2.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ hive-it-qfile ---
[INFO]
[INFO] --- properties-maven-plugin:1.0-alpha-2:read-project-properties (default) @ hive-it-qfile ---