@b0bu
b0bu / letsencrypt_ansible_stage_to_prod.md
Last active May 24, 2021 13:23
test letsencrypt challenges against their staging api before rolling to production

Here's an example of letting ansible provision certificates and test challenges against a dns provider using the Let's Encrypt staging api, then rolling on to the production api once that succeeds. This ensures you don't hit an api rate limit with LE and that dns and challenge functionality are working properly. Note the task file is reused and vars: are passed like a function signature.

flags is used in pull.sh and server/quiet are used in cli.ini. There's a cron element, not shown here, which runs a renewal script once the initial pull has been issued by ansible.

# ansible-playbook -i inventory le.yaml --tags test-letsencrypt-challenge
---
- import_tasks: issue-certificates.yml
  vars:
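
A minimal sketch of exercising that stage-then-prod flow from the shell, assuming a second tag exists for the production pass (only the test tag appears above; the production tag name here is made up):

# run the staging challenge test first, and only roll on to production if it succeeds
ansible-playbook -i inventory le.yaml --tags test-letsencrypt-challenge \
  && ansible-playbook -i inventory le.yaml --tags issue-letsencrypt-production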
@b0bu
b0bu / daft_timer_in_python.md
Created May 25, 2021 11:28
daft_timer_in_python

Maybe the daftest of all gists. For timing anything in bash, timeout n or sleep n is what you should absolutely use, but I thought the visual was cool for this one.

timer () {
  t=${1:-60}
  python -c '
import time
import sys
t = int(sys.argv[1])
for i in range(t):
    # overwrite the same terminal line with the seconds remaining
    print(f"\r {t-i}", end="", flush=True)
    time.sleep(1)
print()
' "$t"
}
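
For example, a ten second countdown in an interactive shell:

timer 10   # counts 10, 9, 8 ... down to 1 in place, one tick per second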
@b0bu
b0bu / mariadb_gdb_coredump.md
Created May 25, 2021 13:11
corefiles in the kernel for mariadb

Enable coredumps at the kernel level:

mkdir /data/corefiles
chmod 777 /data/corefiles
echo /data/corefiles/core > /proc/sys/kernel/core_pattern
echo 1 > /proc/sys/kernel/core_uses_pid
sysctl -w fs.suid_dumpable=2
cat <<SETCORE > /etc/sysctl.d/mariadb_core.conf
kernel.core_pattern=/data/corefiles/core
kernel.core_uses_pid=1
fs.suid_dumpable=2
SETCORE
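
The kernel pattern alone won't produce a core if the mariadb process itself runs with a zero core size limit. A minimal sketch of lifting that limit too, assuming a systemd-managed mariadb.service (the unit name and drop-in path are assumptions):

# allow unlimited core files in the current shell (for ad-hoc runs, e.g. under gdb)
ulimit -c unlimited

# persist it for the service with a drop-in override (assumes the unit is mariadb.service)
mkdir -p /etc/systemd/system/mariadb.service.d
cat <<LIMITCORE > /etc/systemd/system/mariadb.service.d/core.conf
[Service]
LimitCORE=infinity
LIMITCORE
systemctl daemon-reload
systemctl restart mariadb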
@b0bu
b0bu / call_playbook_with_vars.md
Created May 25, 2021 14:42
ansible - pass interpolated values to import_playbook

This is possible:

---
- import_playbook: collection.base.onprem
  vars:
    icinga_servers: ["server1", "server2"]
    auter_cron_prep_spec: "0 5 * * Mon root"
    auter_cron_apply_spec: "0 6 * * Mon root"

However, this is not:

@b0bu
b0bu / haproxy_metrics.md
Created May 26, 2021 12:39
haproxy stats: qtime,ctime,rtime,ttime?

I previously added this to a response on stackoverflow. In haproxy >= 2 you now get two values, n / n, which are the max within a sliding window and the average for that window. The max value remains the max across all sample windows until a higher value is found. On 1.8 you only get the average.

Example of haproxy 2 vs 1.8. Note these proxies are used very differently and under dramatically different loads.

So it looks like the average response times, at least since the last reboot, are 66ms and 275ms.

The average is computed as:

(data time + previous average * (cumulative http connections - 1)) / cumulative http connections
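
Read as a plain cumulative running average (my reading of the formula above, not something the stats page documents), the update can be checked with made-up numbers:

# hypothetical values: previous average ttime (ms), cumulative http connections, latest ttime sample (ms)
prev_avg=270
conns=1000
sample=320

awk -v a="$prev_avg" -v n="$conns" -v v="$sample" \
  'BEGIN { printf "new average: %.2f ms\n", (v + a * (n - 1)) / n }'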
@b0bu
b0bu / local_ansible_playbook.md
Last active May 26, 2021 14:14
running ansible-playbook locally

A quick one on the fly, without having to change the playbook (you won't have groups though). The trailing comma after 127.0.0.1 is what makes ansible treat it as an inline host list rather than an inventory file.

 ansible-playbook playbook.yml --connection=local --inventory 127.0.0.1, --tags whatever
@b0bu
b0bu / array_v_list.md
Last active May 27, 2021 13:14
array v list in python

Something I thought was super interesting: the array stores the million values inline as (typically 4-byte) unsigned ints, so it comes out at roughly half the reported size of the list, whose __sizeof__ only counts its 8-byte pointer slots and not the int objects they point to.

# %%
from array import array

list_of_1_million_signed_ints = list(range(0, 10**6))
array_of_1_million_signed_ints = array("I", list_of_1_million_signed_ints)

print(f"size of list in mb {list_of_1_million_signed_ints.__sizeof__() / 2**20:.2f}MiB")
print(f"size of array in mb {array_of_1_million_signed_ints.__sizeof__() / 2**20:.2f}MiB")
@b0bu
b0bu / haproxy_whitelists.md
Last active June 16, 2021 22:44
Whitelists in haproxy (the right way)

tldr; Don't just test a whitelist based on an initial pass/fail. An update to that whitelist, or the addition of a parameter to a use_backend statement alone, can cause a routing mess.

I don't normally say things like "the right way", but in this case attention to detail is the right way. We had the two use_backend statements in haproxy shown below, where an IP address that wasn't in the whitelist would be routed straight to production. The proposed fix meant that traffic in the whitelist would always be routed to production, which is the opposite of what I believe was intended in both cases.

  use_backend b1 if host-site worldpay_callback worldpay_whitelist worldpay_env_dev worldpay_auth
  use_backend b2 if host-site worldpay_callback worldpay_whitelist worldpay_env_prd worldpay_auth

This works; you can put whitelist evaluation in a use_backend statement, but if it's nested inside a larger scope and the logic falls through, it's going to bite you. Troubleshooting this par

@b0bu
b0bu / ansible_gather_facts_conditiional.md
Last active June 3, 2021 09:47
conditionally gather facts in ansible

Fact gathering can take a long time, especially on centos/rhel boxes and especially on proxies where your open file limit might be high: 2-3 minutes or longer. If I have a play that's only supposed to run some of the time, based on a combination of when: or tags: statements, but that play needs to gather facts, it can slow things down or gather facts at the wrong play. If anything, this gives the illusion that the play is actually doing something when it's not even supposed to be running. When your automation runs in a pipeline and no one's watching it, you want the feedback and logging to be as clean as possible, i.e. what ran was all that was supposed to run, in the way it was supposed to be run.

You can gather facts on a certain play based on whether the tag that runs that play's tasks was provided to the ansible engine. Now this play won't gather facts, and won't log as having done anything, unless keepalived is explicitly provided on the command line. Gathering of facts will fall through to subseq

@b0bu
b0bu / ansible_2_members_1_vip.md
Last active July 9, 2021 10:19
2 members 1 vip

I have 2 member servers on an L2 network

I have some post-deploy sanity checks to do on proxy pairs using keepalived, where after the initial deployment or a change to the vip configuration the 'network' subset of ansible_facts is too slow to regather. I.e. if a service is broken, I'm not going to sit there for 3 minutes or longer waiting for ansible to crawl my servers. The solution is to just pull the information you need and test it.

requirements

Where vip can be one or more vips, and the states can be:

vip server1 - proceed
vip server2 - proceed
vip no servers - error
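
A minimal sketch of that kind of check on a member, assuming the vip is a plain IPv4 address carried on a known interface (the address and interface below are made up):

# report whether this member currently holds the vip
vip="192.0.2.10"
iface="eth0"

if ip -4 addr show dev "$iface" | grep -qF "inet $vip/"; then
  echo "vip $vip present on $(hostname -s) - proceed"
else
  echo "vip $vip not present on $(hostname -s)"
fi

Run it against both members: present on either one maps to proceed, absent from both maps to the error state.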