GitHub gists by Anders L. Hurum (peakBreaker)
peakBreaker / my_arch_install
Last active September 7, 2020 14:09
My entire Arch install with hard drive encryption and basic setup before running PIES
################################ MY ARCH INSTALL #################################
# Official install guide: https://wiki.archlinux.org/index.php/installation_guide
#################### NOTE: Dual booting with Windows 10 UEFI: ####################
## - Use the existing EFI partition made by Windows instead of creating a new one
## - Free up unpartitioned space for the Linux install from within Windows
## - Configure GRUB to choose between Windows or Arch boot
##################################################################################
# Pre Install:
## Download arch from https://www.archlinux.org/download/
## Flash to USB drive (double-check the target device first), e.g.:
## dd bs=4M if=path/to/archlinux.iso of=/dev/sdX status=progress oflag=sync
peakBreaker / MyArgs.sh
Last active October 11, 2018 08:26
Handle bash CLI args
#!/usr/bin/env bash
# GET ARGS
while getopts ":r:c:o:w:s:hd" o; do case "${o}" in
    h)
        echo -e "Optional arguments for custom use:"
        echo -e "  -r: Repository (local file or url)"
        echo -e "  -c: Config file"
        echo -e "  -o: Output file"
        echo -e "  -w: Worker program to call"
        exit 0 ;;
    # Handlers for the remaining options are truncated in the preview;
    # a minimal catch-all keeps the block runnable:
    *) echo "Unknown option; run with -h for help" >&2; exit 1 ;;
esac done
peakBreaker / cuphead.sh
Created October 15, 2018 14:05
CUPS setup
# Program to enable Avahi service discovery of printers and CUPS
# on Arch Linux
#
## First install the needed programs
sudo pacman -S cups nss-mdns
## Add user to cups group
sudo usermod -a -G cups <USER>
## Alter the hosts line in /etc/nsswitch.conf to be like this
# hosts: ... mdns_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] dns ...
## Finally enable and start the daemons (assumed closing step, as the preview is
## truncated; on older installs the CUPS unit is org.cups.cupsd.service)
sudo systemctl enable --now avahi-daemon.service cups.service
peakBreaker / PageTable.tex
Created October 17, 2018 13:59
FullpageTable
%%
%% Full page table test
%%
\documentclass[12pt]{article}
\pagenumbering{gobble}
\usepackage{tabularx}
\newcolumntype{C}{>{\hsize=.5\hsize}X}
\usepackage[left=1cm, right=1cm, top=1cm, bottom=1cm]{geometry}
% The preview truncates here; a minimal assumed body (note that the hsize
% fractions across a row should sum to the number of X-type columns):
\begin{document}
\begin{tabularx}{\textwidth}{|>{\hsize=1.5\hsize}X|C|}
  \hline
  Wide column & Narrow column \\ \hline
\end{tabularx}
\end{document}
peakBreaker / daterange.py
Last active November 26, 2018 13:00
Iterating through historical data until today
import datetime

# Set the config for the date iterator
start_date = datetime.datetime(2018, 9, 10)
end_date = datetime.datetime(2018, 11, 20)
d = start_date
delta = datetime.timedelta(days=1)
# Iterate from start date, adding delta with every iteration
while d <= end_date:
    # Loop body truncated in the preview; e.g. process one day per iteration:
    print(d.strftime('%Y-%m-%d'))
    d += delta
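A hedged generator variant of the same iteration (an added sketch, not part of the gist):

import datetime

def daterange(start, end, step=datetime.timedelta(days=1)):
    """Yield every datetime from start to end inclusive, stepping by step."""
    d = start
    while d <= end:
        yield d
        d += step

for day in daterange(datetime.datetime(2018, 9, 10), datetime.datetime(2018, 9, 12)):
    print(day.date())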
peakBreaker / get_filename.py
Last active November 26, 2018 12:48
Gets the filename, without extension, from the script running this code.
from os import path

# Get filename, which can be used for script identification
filename_no_ext = path.splitext(path.basename(__file__))[0]
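An equivalent one-liner with pathlib, offered as an alternative to the os.path version above (an added sketch, not the gist's code):

from pathlib import Path

filename_no_ext = Path(__file__).stem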
peakBreaker / analyze.md
Last active February 7, 2020 17:00
Code analysis in Python

Based on a talk by James Powell: https://www.youtube.com/watch?v=mr2SE_drU5o

Static Analysis

  1. cloc
  2. find -iname '*.*' | xargs cat | sed -e 's/^[ \t]*//' | sort | uniq -c | sort -nr
  3. Python:

from subprocess import check_output
files = check_output('find -iname *.<type>'.split())\
        .decode().splitlines()  # continuation assumed; the preview truncates here
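A pure-Python take on step 2, counting the most frequent stripped lines across a source tree; an added sketch under the assumption that .py files are the target:

from collections import Counter
from pathlib import Path

counts = Counter(
    line.strip()
    for f in Path('.').rglob('*.py')
    for line in f.read_text(errors='ignore').splitlines()
)
for line, n in counts.most_common(10):
    print(n, line)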
peakBreaker / statutils.py
Last active June 11, 2019 13:18
ecdf, correlation, bootstrapping
import numpy as np

def ecdf(data):
    """Compute ECDF for a one-dimensional array of measurements.
    Very useful for graphical EDA
    """
    # Number of data points: n
    n = len(data)
    # x-data for the ECDF: x
    x = np.sort(data)
    # y-data for the ECDF: evenly spaced fractions from 1/n to 1
    y = np.arange(1, n + 1) / n
    return x, y
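The preview stops before the correlation and bootstrapping helpers, so here is a minimal bootstrap-replicate sketch matching the gist's stated scope (an assumption, not the original code):

import numpy as np

def bootstrap_replicates_1d(data, func, size=10000, seed=None):
    """Draw bootstrap replicates of a statistic from a 1-D array."""
    rng = np.random.default_rng(seed)
    return np.array([func(rng.choice(data, size=len(data))) for _ in range(size)])

reps = bootstrap_replicates_1d(np.random.normal(0, 1, 100), np.mean, size=1000, seed=0)
print(reps.mean(), reps.std())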
peakBreaker / postproc_scikit_sample.py
Last active June 19, 2019 08:30
Postprocessing multiple scikit-learn models' probabilities and predictions into a multilevel dataframe
# Pred and prob arrays are numpy array outputs from a sklearn model:
# - pred_array = model.predict(X).astype(int)
# - prob_arr = model.predict_proba(X)
#
# Here we run the initial data through multiple models and structure the
# model output into a multilevel dataframe for probabilities and predictions
#
# Typically the next stage would be to enhance the labels of numerical results
# to string/categories or similar based on whatever we want, as well as providing
# the results to a database or something like that
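Since the preview shows only the comments, here is a minimal sketch of the described postprocessing, assuming two toy classifiers and a (model, field) MultiIndex on the columns:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)
models = {
    'logreg': LogisticRegression(max_iter=1000).fit(X, y),
    'forest': RandomForestClassifier(random_state=0).fit(X, y),
}

frames = {}
for name, model in models.items():
    pred_array = model.predict(X).astype(int)
    prob_arr = model.predict_proba(X)  # shape (n_samples, n_classes)
    frames[name] = pd.DataFrame({
        'pred': pred_array,
        'prob_0': prob_arr[:, 0],
        'prob_1': prob_arr[:, 1],
    })

# Concatenating the dict yields a multilevel (model, field) column index
result = pd.concat(frames, axis=1)
print(result.head())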
peakBreaker / mytsne.py
Created July 7, 2019 15:09
Running TSNE
# Import TSNE
from sklearn.manifold import TSNE

def run_tsne(samples):
    # Create a TSNE instance: model
    model = TSNE(learning_rate=200)
    # Apply fit_transform to samples: tsne_features
    tsne_features = model.fit_transform(samples)
    return tsne_features
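A hedged usage sketch for the function above, projecting the iris data and plotting the two t-SNE components (an added example, not the gist's code):

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
tsne_features = run_tsne(iris.data)
plt.scatter(tsne_features[:, 0], tsne_features[:, 1], c=iris.target)
plt.show()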