Mathieu Fenniak (mfenniak)

@mfenniak
mfenniak / gist:56f050b9a4877d1b07388e4d01de829c
Created May 23, 2017 14:00
4.4.51-40.58.amzn1.x86_64 ena output
May 17 06:30:26 ip-172-16-15-126 consul[13099]: 2017/05/17 06:30:26 [INFO] serf: attempting reconnect to i-0ffe3ee8de2922539 172.16.14.98:8301
May 17 06:30:46 ip-172-16-15-126 consul[13099]: 2017/05/17 06:30:46 [INFO] agent: Synced node info
May 17 06:31:35 ip-172-16-15-126 dhclient[3332]: XMT: Solicit on eth0, interval 109170ms.
May 17 06:31:39 ip-172-16-15-126 consul[13099]: 2017/05/17 06:31:39 [WARN] memberlist: Refuting a suspect message (from: i-09ee29f77300a149d)
May 17 06:32:45 ip-172-16-15-126 consul[13099]: 2017/05/17 06:32:45 [INFO] serf: EventMemberJoin: i-03549fcaa8f4e1636 172.16.15.66
May 17 06:32:49 ip-172-16-15-126 consul[13099]: 2017/05/17 06:32:49 [INFO] memberlist: Marking i-057b08569e51a333d as failed, suspect timeout reached
May 17 06:32:49 ip-172-16-15-126 consul[13099]: 2017/05/17 06:32:49 [INFO] serf: EventMemberLeave: i-057b08569e51a333d 172.16.16.105
May 17 06:32:53 ip-172-16-15-126 consul[13099]: 2017/05/17 06:32:53 [WARN] memberlist: Refuting a suspect me

# ========
# captured on: Fri May 12 20:07:59 2017
# hostname : ip-10-34-11-79
# os release : 4.4.0-70-generic
# perf version : 4.4.49
# arch : x86_64
# nrcpus online : 1
# nrcpus avail : 1
# cpudesc : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
# cpuid : GenuineIntel,6,63,2

Analysis of sampling postgres (pid 94571) every 1 millisecond
Process: postgres [94571]
Path: /Users/mathieu.fenniak/Development/pgsql-9.5.4-avoid-search/bin/postgres
Load Address: 0x1061db000
Identifier: postgres
Version: 0
Code Type: X86-64
Parent Process: postgres [94365]
Date/Time: 2017-05-10 09:40:17.180 -0600

Analysis of sampling postgres (pid 32304) every 1 millisecond
Process: postgres [32304]
Path: /usr/local/Cellar/postgresql/9.6.2/bin/postgres
Load Address: 0x10d9c1000
Identifier: postgres
Version: 0
Code Type: X86-64
Parent Process: postgres [1006]
Date/Time: 2017-05-05 14:07:12.371 -0600
@mfenniak
mfenniak / gist:3008378
Created June 28, 2012 02:27
Creates an aggregate called hstore_merge that merges PostgreSQL's HSTORE values using the concat (||) operator
CREATE OR REPLACE FUNCTION hstore_merge(left HSTORE, right HSTORE) RETURNS HSTORE AS $$
    SELECT $1 || $2;
$$ LANGUAGE SQL;

CREATE AGGREGATE hstore_merge (HSTORE) (
    SFUNC = hstore_merge,
    STYPE = HSTORE,
    INITCOND = ''
);
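For reference, here is a minimal sketch of how the aggregate above could be exercised from Python with psycopg2. It is not part of the original gist; the products table, attrs column, and connection DSN are hypothetical.

# Sketch only: assumes a "products" table with a "category" column and an
# "attrs" HSTORE column, and that the hstore_merge aggregate above exists.
import psycopg2
import psycopg2.extras

conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
psycopg2.extras.register_hstore(conn)   # return HSTORE values as Python dicts

with conn.cursor() as cur:
    # Merge every product's attrs within a category into a single HSTORE.
    # As with the || operator, later-processed rows win on duplicate keys.
    cur.execute("""
        SELECT category, hstore_merge(attrs)
        FROM products
        GROUP BY category
    """)
    for category, merged_attrs in cur:
        print(category, merged_attrs)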
22:24:58.587 [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator - User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group timesheet-list failed on partition assignment
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store ServiceCenterParentFlatMap-topic at location /tmp/kafka-streams/timesheet-list/33_5/rocksdb/ServiceCenterParentFlatMap-topic
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:173)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:143)
at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:148)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore
@mfenniak
mfenniak / log-sequence-CommitFailedException.log
Created March 10, 2017 18:03
Log output & sequence from Kafka Streams CommitFailedException
App starts up. Consumer config is logged, showing max.poll.interval.ms = 1800000 (30 minutes).
{
"timestamp":"2017-03-08T22:13:51.139+00:00",
"message":"ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [10.20.4.219:9092, 10.20.6.54:9092]
check.crcs = true
client.id = timesheet-list-3d89bd2e-941f-4272-9916-aad4e6f82e36-StreamThread-1-consumer
connections.max.idle.ms = 540000
... more running log data ...
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 19_2
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 20_1
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 21_1
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 21_3
15:51:30.489 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Closing the state manager of task 2_0
15:51:30.501 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Closing the state manager of task 3_0
15:51:30.501 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Closing the state manager of task 1_2
15:51:30.518 [StreamThread
@mfenniak
mfenniak / gist:3006798
Created June 27, 2012 20:56
Plugin that enables unicode decoding for PostgreSQL's CITEXT column type when using psycopg2 and SQLAlchemy
from sqlalchemy.event import listen

def register_citext_type(conn, con_record):
    from psycopg2.extensions import new_type, register_type
    from contextlib import closing

    def cast_citext(in_str, cursor):
        if in_str is None:
            return None
        return unicode(in_str, cursor.connection.encoding)

    with closing(conn.cursor()) as c:
        c.execute("SELECT pg_type.oid FROM pg_type WHERE typname = 'citext'")
@mfenniak
mfenniak / gist:509fb82dfcfda79a21cfc1b07dafa89c
Created December 4, 2016 03:43
kafka-streams error w/ trunk @ e43bbce
java.lang.IllegalStateException: Attempting to put a clean entry for key [urn:replicon-tenant:strprc971e3ca9:timesheet:97c0ce25-e039-4e8b-9f2c-d43f0668b755] into NamedCache [0_0-TimesheetNonBillableHours] when it already contains a dirty entry for the same key
at org.apache.kafka.streams.state.internals.NamedCache.put(NamedCache.java:124)
at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:120)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.get(CachingKeyValueStore.java:146)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.get(CachingKeyValueStore.java:133)
at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateValueGetter.get(KTableAggregate.java:128)
at org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoin$KTableKTableLeftJoinProcessor.process(KTableKTableLeftJoin.java:81)
at org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoin$KTableKTableLeftJoinProcessor.process(KTableKTableLeftJoin.java:54)
at o