Indrek Juhkam (indrekj)
FROM centos:7
# I've removed lualdap and lua-resty-ldap from rockspec/apisix-master-0.rockspec
# NB! Do not copy `deps` if they're present locally
COPY ./ /usr/local/apisix
WORKDIR /usr/local/apisix
RUN ./ci/centos7-ci.sh install_dependencies
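To try the image out, a build-and-inspect sketch (the image tag is my own choice, not from the gist):

```shell
# Build the CentOS 7 dependency image from the APISIX checkout root
# (tag name is an assumption)
docker build -t apisix-deps-test .

# Drop into a shell to verify the installed dependencies
docker run --rm -it apisix-deps-test bash
```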
require 'timeout'

x = Time.now
begin
  Timeout.timeout(5) do
    sleep 0
  ensure
    sleep 10 # the ensure clause runs as part of the timed block
  end
end
puts Time.now - x
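For contrast, with no ensure clause inside the block, Timeout interrupts the sleep promptly — a minimal sketch:

```ruby
require 'timeout'

started = Time.now
begin
  # The block sleeps far longer than the limit, so Timeout::Error
  # is raised after roughly 1 second and the sleep is cut short.
  Timeout.timeout(1) { sleep 5 }
rescue Timeout::Error
  # expected: interrupted after about 1 second
end
elapsed = Time.now - started
```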
indrekj / truenas-kubectl.md (last active June 16, 2024)
How to access TrueNAS kubectl remotely?

How to access TrueNAS kubectl remotely from your local computer?

DISCLAIMER: This is an unofficial guide. If you mess things up, you may lock yourself out of TrueNAS or, even worse, make it unusable. There's also no guarantee this will keep working in future releases.

Through SSH

Currently the easiest way to access kubectl is through SSH and the k3s tool. If you have SSH access enabled, you can SSH into your TrueNAS server and use it there:
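A sketch of what that looks like (the hostname and user are placeholders, not from the original guide):

```shell
# SSH into the TrueNAS box (host/user are assumptions) and run
# kubectl through the bundled k3s binary
ssh root@truenas.local "k3s kubectl get pods -A"
```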

[debug] OTLP tracer :opentelemetry_exporter failed to initialize: exception throw: {application_either_not_started_or_not_ready,
tls_certificate_check}
in function tls_certificate_check_shared_state:latest_shared_state_key/0 (/home/indrek/gems/hello/deps/tls_certificate_check/src/tls_certificate_check_shared_state.erl, line 338)
in call from tls_certificate_check_shared_state:get_latest_shared_state/0 (/home/indrek/gems/hello/deps/tls_certificate_check/src/tls_certificate_check_shared_state.erl, line 320)
in call from tls_certificate_check_shared_state:authoritative_certificate_values/0 (/home/indrek/gems/hello/deps/tls_certificate_check/src/tls_certificate_check_shared_state.erl, line 126)
in call from tls_certificate_check:options/1 (/home/indrek/gems/hello/deps/tls_certificate_check/src/tls_certificate_check.erl, line 78)
in call from opentelemetry_exporter:parse_endpoint/2 (/home/indrek/gems/hello/deps/opentelemetry_exporter/src/opentelemetry_exporter.erl, line 275)
in ca
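This error usually means the tls_certificate_check OTP application was not started before opentelemetry_exporter tried to use it. One possible fix, assuming an Elixir Mix project (the app and module names here are hypothetical), is to make sure it is started as part of your application:

```elixir
# mix.exs — a sketch; lists tls_certificate_check so it starts
# before the OTLP exporter initializes (project name is assumed)
def application do
  [
    extra_applications: [:logger, :tls_certificate_check],
    mod: {Hello.Application, []}
  ]
end
```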

Setup

First I created a k3d cluster and installed our old Istio (1.6.11) there using kubectl apply in the sm-configuration dir. I also added a couple of pods with the sidecar enabled.

Before upgrade (background check)

Started a recurring curl just to see if there's any disruption:
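A sketch of such a probe loop (the service URL is a placeholder, not from the original notes):

```shell
# Poll the service once a second, printing HTTP status and latency;
# gaps or non-200 codes during the upgrade indicate a disruption.
while true; do
  curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' \
    http://localhost:8080/healthz
  sleep 1
done
```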

3c3
< for (var n, a, o = t[0], i = t[1], u = 0, s = []; u < o.length; u++) a = o[u], r[a] && s.push(r[a][0]), r[a] = 0;
---
> for (var n, a, o = t[0], i = t[1], u = 0, c = []; u < o.length; u++) a = o[u], r[a] && c.push(r[a][0]), r[a] = 0;
5c5
< for (c && c(t); s.length;) s.shift()()
---
> for (s && s(t); c.length;) c.shift()()
9c9
< 2: 0
{
"type": "index_parallel",
"spec": {
"dataSchema": {
"dataSource": "visitor_events-sessions-test1",
"timestampSpec": {
"column": "timestamp",
"format": "auto"
},
"dimensionsSpec": {
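To submit an ingestion spec like the one above, assuming it is saved as spec.json and a Druid router is listening on the default port 8888 (both assumptions):

```shell
# POST the native batch ingestion task to Druid's task endpoint
# (host and port are assumptions, not from the original notes)
curl -X POST -H 'Content-Type: application/json' \
  -d @spec.json \
  http://localhost:8888/druid/indexer/v1/task
```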