- Architecture Threat Modeler: https://partyrock.aws/u/testinguser883/R4PI1UIc2/Architecture-Threat-Modeler
- Speaker Spotlight: https://partyrock.aws/u/ChloeMcA/8_LQK-Hqq/SpeakerSpotlight/snapshot/9nkN1GQr_
# Install Homebrew
# BEGIN Fix touch while this is not closed
[ ! -f /usr/bin/touch ] && sudo ln /bin/touch /usr/bin/touch
# END Fix touch while this is not closed
CI=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
If you have problems when downloading large result sets from Redash, you may be running into gunicorn's default 30-second timeout. To solve this you can edit:
/opt/redash/supervisord/supervisord.conf
and change:
[program:redash_server]
command=/opt/redash/current/bin/run gunicorn -b 127.0.0.1:5000 --name redash -w 4 redash.wsgi:app
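gunicorn's --timeout flag controls this, so the edit could look like the following (300 seconds is an arbitrary choice, pick what suits your queries):
[program:redash_server]
command=/opt/redash/current/bin/run gunicorn -b 127.0.0.1:5000 --name redash --timeout 300 -w 4 redash.wsgi:app
Afterwards restart the service, e.g. sudo supervisorctl restart redash_server.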
- Prerequisite: install the required developer libraries:
sudo apt-get install libssl-dev libreadline-dev
- Then, follow this tutorial: https://www.anegron.site/2020/01/30/installing-rbenv-and-ruby-on-raspberry-pi/
- Now activate the proper version for Homebrew. As of July 2020 the commands were:
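With rbenv, selecting a version typically looks like this (the version number below is a placeholder, not necessarily the one used in 2020):
rbenv global 2.7.1
rbenv rehash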
# pip install awscurl
export COLLECTION_ID=j04odjdwa8f5xxxxxxxx
# REST API calls go to the collection endpoint; jq -r strips the surrounding quotes
export OPENSEARCHHOST=`aws opensearchserverless batch-get-collection --ids ${COLLECTION_ID} | jq -r '.collectionDetails[].collectionEndpoint'`
# Delete all indexes that follow a specific pattern
delete_old_indexes() {
  # TARGETDATE should look like YYYY.MM.DD where the date is 1 month before now.
  export TARGETDATE=`date -d "-1 month" +"%Y.%m.%d"`
  export INDEXLIST=$(awscurl --service aoss "${OPENSEARCHHOST}/_cat/indices" | grep ocsf | grep ${TARGETDATE} | awk '{print $1}')
  echo "${INDEXLIST}" | while read index; do awscurl --service aoss -X DELETE "${OPENSEARCHHOST}/${index}"; done
}
This is a customized snippet using Vega. The original idea is from https://github.com/aws-solutions/centralized-logging-with-opensearch, but this version is customized to consume OCSF logs ingested into Security Lake.
Some tips:
- To debug Vega scripts, you can run
VEGA_DEBUG.view.data('rawData')
in your browser console to retrieve the data in rawData (look at the beginning of the file above).
- I'm not sure how to programmatically inject this code, but if you need to create this in your own dashboard, you can add a new Vega visualization and copy and paste the code above.
On macOS, you can run dig whatever.local
and get results if you have the entry in a local DNS server (like Pi-hole), but curl or browsing will fail.
This is because Apple enforces that the .local domain is only resolved through the mDNS Bonjour service (more info).
To solve this, I decided to run the avahi-daemon on my local Raspberry Pi to publish additional services.
I decided to use the avahi-aliases project to simplify publishing more than one name on the same IP, as the default avahi-daemon doesn't allow this at the moment.
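For a quick one-off test, avahi-utils can also publish a single extra name without the aliases project (hostname and IP below are placeholders; the command keeps running in the foreground while the name is published):
avahi-publish -a nas.local 192.168.1.50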
from datetime import datetime, timedelta

# Every day of the week (starting from tomorrow) for the past 50 weeks
now = datetime.now() + timedelta(days=1)
for i in range(50):
    delta = timedelta(days=7*i)
    print('"{}"'.format((now - delta).strftime("%b %-d, %Y")))

# Every first Monday of the year
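# The code for this comment was cut off; below is a minimal sketch under one
# reading of it, computing the first Monday of a given year (the first_monday
# helper is mine, not from the original).
from datetime import date

def first_monday(year):
    d = date(year, 1, 1)
    # weekday(): Monday == 0, so advance to the next Monday when Jan 1 isn't one
    return d + timedelta(days=(7 - d.weekday()) % 7)

print(first_monday(datetime.now().year).strftime("%b %-d, %Y"))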
If you depend on the Notes app and don't have iCloud sync enabled, you may find it challenging to migrate notes from one machine to another, or even just to back them up.
As of today (2019-12) the following approach works (tested migrating from High Sierra to Catalina):
# On the source machine
cd ~/Library/Group\ Containers/group.com.apple.notes
zip NoteStore-backup.zip NoteStore.sqlite*
cd ~/Library/Containers/com.apple.Notes/Data/Library/Notes/
zip NotesV7-backup.zip NotesV7*
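Restoring is presumably the reverse; a sketch: copy the archives over, quit Notes on the destination machine, and unzip them back into the same paths (the archive paths below are placeholders, and backing up the existing files first is a good idea):
# On the destination machine, with Notes closed
cd ~/Library/Group\ Containers/group.com.apple.notes
unzip -o /path/to/NoteStore-backup.zip
cd ~/Library/Containers/com.apple.Notes/Data/Library/Notes/
unzip -o /path/to/NotesV7-backup.zip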
# Assumes you are capturing the output of your Go app's panic into /tmp/crash
# Reason for the panic
head -3 /tmp/crash
# Register status (22 may change in different architectures)
tail -22 /tmp/crash
# Number of goroutines
grep -c goroutine /tmp/crash
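Since Go writes panics to stderr, capturing the crash in the first place could look like this (the binary name is a placeholder):
./myapp 2>/tmp/crash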