Todd Lipcon (toddlipcon)
(pyarrow-nightly-3.7) todd@va1022:/data/1/todd/kudu$ perf record -a python test.py arrow
Couldn't synthesize bpf events.
Read 200,000,000 rows in 2.6614990234375s (75,145,622 rows/sec, 150,291,244 cells/sec, 859MB/sec)
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 1.980 MB perf.data (21404 samples) ]
(pyarrow-nightly-3.7) todd@va1022:/data/1/todd/kudu$ perf report --percent-limit 1 --stdio --sort=sym | c++filt
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#include <boost/optional.hpp>

using namespace boost;

class C {
 public:
  C() :
      data_(initial_data_) {
  }
workers  workload              system           qps
------------------------------------------------------------
      4  single-groupby-5-8-1  victoriametrics   964.85
      8  single-groupby-5-8-1  victoriametrics  1117.75
      9  single-groupby-5-8-1  victoriametrics  1120.69
     10  single-groupby-5-8-1  victoriametrics  1121.38
     11  single-groupby-5-8-1  victoriametrics  1121.42
     12  single-groupby-5-8-1  victoriametrics  1120.82
     13  single-groupby-5-8-1  victoriametrics  1120.62
     14  single-groupby-5-8-1  victoriametrics  1120.51
> select * from foo
name: foo
time                 f1   tag1  tag2
----                 --   ----  ----
1579885717128019698  100  v1    v2
1579885727536934291  200  v1
> select max(f1) from foo group by tag2
name: foo
tags: tag2=
time max
todd@turbo:~/kudu$ ./build/latest/bin/kudu table statistics localhost metrics.cpu 2>/dev/null
TABLE metrics.cpu
on disk size: 1709128375
live row count: 103680000
todd@turbo:~/kudu$ time ./build/latest/bin/kudu table scan localhost metrics.cpu --columns 'series_id,timestamp' --predicates '["AND", [">", "usage_guest", 99]]' -num-threads 1 2>/dev/null | wc -l
725143
real 0m2.056s
user 0m0.555s
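
For comparison, the same scan can be driven from the Kudu Java client. This is a rough sketch, not part of the transcript above: the master address defaults are mine, and it assumes usage_guest is stored as an integer column (use the double overload of newComparisonPredicate otherwise), so the row count above won't necessarily reproduce exactly.

import java.util.Arrays;
import org.apache.kudu.client.*;

public class ScanCpu {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("localhost").build();
    KuduTable table = client.openTable("metrics.cpu");
    KuduScanner scanner = client.newScannerBuilder(table)
        .setProjectedColumnNames(Arrays.asList("series_id", "timestamp"))
        .addPredicate(KuduPredicate.newComparisonPredicate(
            table.getSchema().getColumn("usage_guest"),
            KuduPredicate.ComparisonOp.GREATER, 99))   // assumes an integer column
        .build();
    long rows = 0;
    while (scanner.hasMoreRows()) {
      for (RowResult ignored : scanner.nextRows()) {
        rows++;
      }
    }
    System.out.println(rows + " rows matched");
    client.close();
  }
}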
import java.util.Base64;

import org.bouncycastle.asn1.util.*;
import org.bouncycastle.asn1.*;

class Test {
  // Builds a two-element DER SEQUENCE of explicitly tagged fields:
  // [0] the integer type, [1] the salt bytes.
  static DERSequence makeSalt(int type, String salt) throws Exception {
    return new DERSequence(new ASN1Encodable[]{
        new DERTaggedObject(true, 0, new DERInteger(type)),
        new DERTaggedObject(true, 1, new DEROctetString(salt.getBytes("UTF-8")))
    });
  }
}

// A second version of the same helper, building the sequence via an
// ASN1EncodableVector instead of an array.
import java.util.Base64;

import org.bouncycastle.asn1.util.*;
import org.bouncycastle.asn1.*;

class Test {
  static DERSequence makeSalt(int type, String salt) throws Exception {
    ASN1EncodableVector vec = new ASN1EncodableVector();
    vec.add(new DERTaggedObject(true, 0, new DERInteger(type)));
    vec.add(new DERTaggedObject(true, 1, new DEROctetString(salt.getBytes("UTF-8"))));
    return new DERSequence(vec);
  }
}
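
The Base64 and org.bouncycastle.asn1.util imports suggest the gist goes on to print the encoded structure. A hypothetical main method for the Test class above could look like this; the type value and salt string are made up:

  public static void main(String[] args) throws Exception {
    DERSequence seq = makeSalt(3, "EXAMPLE.COMtodd");            // hypothetical inputs
    System.out.println(ASN1Dump.dumpAsString(seq, true));        // human-readable ASN.1 dump
    System.out.println(Base64.getEncoder().encodeToString(seq.getEncoded()));
  }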

An interesting question

This morning, one of my colleagues asked me the following interesting question:

So, I have been wondering about this question (which is not necessarily applicable to Kudu): when certain software says it uses a single hot replica for failover, how does it handle the confusion caused by a transient network failure that breaks communication between the primary and the secondary? In other words, how can such a configuration guarantee that both nodes won't each think of themselves as the primary? Do they usually fall back to some third party as an arbitrator? However, that third party may itself suffer a random network partition with one or both of the nodes.
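
For concreteness (this is my own illustration, not part of the original question or discussion): the core hazard is that each node only observes "I can't reach my peer", which is indistinguishable from "my peer is dead". A common mitigation is a time-bounded lease from a third-party arbiter plus a fencing token that the storage layer checks. A minimal sketch, where the Arbiter interface, the 5-second lease, and the epoch stamping are all hypothetical:

// Sketch only: a primary refuses writes once its lease may have lapsed, and
// every write carries the epoch (fencing token) so storage can reject a
// stale ex-primary even if its clock is wrong.
interface Arbiter {
  // Grants (or renews) a lease to nodeId and returns the current epoch.
  long acquireLease(String nodeId, long leaseMillis) throws Exception;
}

class Primary {
  private final Arbiter arbiter;
  private final String nodeId;
  private volatile long epoch;
  private volatile long leaseExpiry;

  Primary(Arbiter arbiter, String nodeId) {
    this.arbiter = arbiter;
    this.nodeId = nodeId;
  }

  void renewLease() throws Exception {
    epoch = arbiter.acquireLease(nodeId, 5000);
    leaseExpiry = System.currentTimeMillis() + 5000;
  }

  void write(byte[] row) {
    if (System.currentTimeMillis() >= leaseExpiry) {
      throw new IllegalStateException("lease lapsed; stepping down");
    }
    // ... apply the write, stamped with `epoch`, to storage that rejects
    // any epoch older than the newest it has seen ...
  }
}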

This led to my brain-dumping for 20 minutes on Slack, which I then figured I'd copy-paste into a "blog post" gist.

My brain-dump

import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.CountDownLatch;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
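
This preview cuts off after the imports. Purely as a guess at the shape of what follows, a minimal async servlet using those imports might look like the sketch below; the class name and the hand-off to the common ForkJoinPool are assumptions (the CountDownLatch import suggests the real gist also coordinates completion somehow).

public class AsyncDemoServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    AsyncContext ctx = req.startAsync();
    ForkJoinPool.commonPool().execute(() -> {
      try {
        PrintWriter out = ctx.getResponse().getWriter();
        out.println("handled on " + Thread.currentThread().getName());
      } catch (IOException e) {
        // ignore for the sketch
      } finally {
        ctx.complete();
      }
    });
  }
}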
commit 9bf51d73c66e36c5f5d26eb161683507d5717069 (HEAD)
Author: Todd Lipcon <todd@apache.org>
Date:   Mon Jul 15 23:52:10 2019 -0700

    wip: wire protocol opt

    Change-Id: I118b01ae2acfd8dd290e7f67445796635d305351

diff --git a/src/kudu/common/wire_protocol-test.cc b/src/kudu/common/wire_protocol-test.cc
index 0317cab47..68e22bdc8 100644