Moxi 1.8.1 memory leaks
10x m1.large Amazon EC2 instances
4 buckets, ~80M items.
Ran service couchbase-server stop/start on one node, then rebalanced the cluster.
valgrind --leak-check=full --tool=memcheck moxi …
==16448==
==16448== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 25 from 1)
==16448== malloc/free: in use at exit: 176,736,618 bytes in 11,245 blocks.
==16448== malloc/free: 76,305,616 allocs, 76,294,371 frees, 3,281,793,242 bytes allocated.
==16448== For counts of detected errors, rerun with: -v
==16448== searching for pointers to 11,245 not-freed blocks.
==16448== checked 47,364,372 bytes.
==16448==
==16448== 458 bytes in 17 blocks are definitely lost in loss record 36 of 81
==16448== at 0x40213C0: malloc (vg_replace_malloc.c:149)
==16448== by 0x40E4E0F: strdup (in /lib/libc-2.5.so)
==16448== by 0x806698A: zstored_acquire_downstream_conn (cproxy.c:3397)
==16448== by 0x8066E9E: cproxy_connect_downstream (cproxy.c:1467)
==16448== by 0x80721F8: cproxy_forward_a2b_downstream (cproxy_protocol_a2b.c:1102)
==16448== by 0x8065277: cproxy_assign_downstream (cproxy.c:1868)
==16448== by 0x806B281: cproxy_process_upstream_ascii (cproxy_protocol_a.c:144)
==16448== by 0x805248E: try_read_command (memcached.c:3203)
==16448== by 0x8052CD7: drive_machine (memcached.c:3536)
==16448== by 0x8090152: event_process_active_single_queue (event.c:1308)
==16448== by 0x8090911: event_base_loop (event.c:1375)
==16448== by 0x805BEB0: worker_libevent (thread.c:272)
==16448==
==16448==
==16448== 544 bytes in 4 blocks are possibly lost in loss record 37 of 81
==16448== at 0x40206FF: calloc (vg_replace_malloc.c:279)
==16448== by 0x4010D69: _dl_allocate_tls (in /lib/ld-2.5.so)
==16448== by 0x4062E53: pthread_create@@GLIBC_2.1 (in /lib/libpthread-2.5.so)
==16448== by 0x805BC38: thread_init (thread.c:177)
==16448== by 0x8054977: main (memcached.c:4924)
==16448==
==16448==
==16448== 1,454,080 bytes in 71 blocks are possibly lost in loss record 80 of 81
==16448== at 0x40206FF: calloc (vg_replace_malloc.c:279)
==16448== by 0x80D839A: populate_buckets (vbucket.c:261)
==16448== by 0x80D891B: parse_vbucket_config (vbucket.c:341)
==16448== by 0x80D8B6C: parse_cjson (vbucket.c:433)
==16448== by 0x80D8E81: vbucket_config_parse_string (vbucket.c:461)
==16448== by 0x807AB09: lvb_create (mcs.c:157)
==16448== by 0x806339A: cproxy_check_downstream_config (cproxy.c:1327)
==16448== by 0x8064ED5: cproxy_reserve_downstream (cproxy.c:946)
==16448== by 0x80652EE: cproxy_assign_downstream (cproxy.c:1790)
==16448== by 0x806B281: cproxy_process_upstream_ascii (cproxy_protocol_a.c:144)
==16448== by 0x805248E: try_read_command (memcached.c:3203)
==16448== by 0x8052CD7: drive_machine (memcached.c:3536)
==16448==
==16448==
==16448== 170,803,200 bytes in 8,340 blocks are definitely lost in loss record 81 of 81
==16448== at 0x40206FF: calloc (vg_replace_malloc.c:279)
==16448== by 0x80D839A: populate_buckets (vbucket.c:261)
==16448== by 0x80D891B: parse_vbucket_config (vbucket.c:341)
==16448== by 0x80D8B6C: parse_cjson (vbucket.c:433)
==16448== by 0x80D8E81: vbucket_config_parse_string (vbucket.c:461)
==16448== by 0x807AB09: lvb_create (mcs.c:157)
==16448== by 0x806339A: cproxy_check_downstream_config (cproxy.c:1327)
==16448== by 0x8063CB1: cproxy_release_downstream (cproxy.c:1191)
==16448== by 0x8065B2F: cproxy_release_downstream_conn (cproxy.c:2151)
==16448== by 0x80661E1: cproxy_on_pause_downstream_conn (cproxy.c:2203)
==16448== by 0x8052E13: drive_machine (memcached.c:3749)
==16448== by 0x8090152: event_process_active_single_queue (event.c:1308)
==16448==
==16448== LEAK SUMMARY:
==16448== definitely lost: 170,803,658 bytes in 8,357 blocks.
==16448== possibly lost: 1,454,624 bytes in 75 blocks.
==16448== still reachable: 4,478,336 bytes in 2,813 blocks.
==16448== suppressed: 0 bytes in 0 blocks.
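
The two largest loss records (80 and 81, roughly 172 MB combined) both allocate in populate_buckets via vbucket_config_parse_string, reached from cproxy_check_downstream_config, which suggests parsed vbucket configs are not being released on that path. Below is a minimal sketch of the parse/destroy pairing one would expect, assuming the libvbucket 1.x-style API bundled with moxi; vbucket_config_destroy and the helper function are assumptions for illustration, not taken from the log or from moxi's source.

    #include <stdio.h>
    #include <libvbucket/vbucket.h>   /* assumed header for the bundled libvbucket API */

    /* Hypothetical helper: parse a cluster config, use it, release it.
     * The loss records suggest the release step is what goes missing when
     * the downstream config is re-parsed after a stop/start and rebalance. */
    static int check_config(const char *config_json)
    {
        VBUCKET_CONFIG_HANDLE vbc = vbucket_config_parse_string(config_json);
        if (vbc == NULL) {
            fprintf(stderr, "vbucket config parse failed\n");
            return -1;
        }

        /* ... compare against the downstream's current config here ... */

        /* Each successful parse allocates the bucket map seen in records 80/81;
         * without a matching destroy it leaks on every config check. */
        vbucket_config_destroy(vbc);
        return 0;
    }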