Royce Williams (roycewilliams)
@JburkeRSAC
JburkeRSAC / urxvt_notes.txt
Created Oct 18, 2015
Gentoo URXVT clipboard buffer notes
!$$$DESCRIPTION$$$:
!**ctrl+alt+cmd for urxvt buffer ==> normal text buffer
!$$$USAGE$$$
!copy from urxvt:
!ctrl+alt+cmd+c
!paste into urxvt:
!ctrl+alt+cmd+v
!Add to ~/.Xdefaults:
!$$$BEGIN$$$
XTerm*transparent: true
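The preview cuts off before the actual clipboard settings (the XTerm*transparent line is unrelated). Entries along these lines, assuming the urxvt-perls "clipboard" perl extension, are the usual way to wire urxvt's selection to the normal clipboard; the exact modifier combo is whatever you choose:

```
! load the clipboard perl extension shipped with urxvt-perls
URxvt.perl-ext-common: default,clipboard
! example bindings (M- is Meta/Alt, C- is Control; adjust to taste)
URxvt.keysym.M-C-c: perl:clipboard:copy
URxvt.keysym.M-C-v: perl:clipboard:paste
```

This is a sketch of the pattern, not the gist's elided contents; merge it with `xrdb -merge ~/.Xdefaults` after editing.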
@FauxFaux
FauxFaux / chrome2ca.sh
Created Dec 30, 2015
Capture CAs from Chrome history
locate -r '/History$' | fgrep chrom | while read x; do echo select url from urls\; | sqlite3 "$x"; done > hist
cut -d/ -f 3 hist | sort -u | xargs -P200 -I{} -n1 -- sh -c ': | openssl s_client -connect {}:443 2> {}.path > {}.handshake'
for f in *.path; do if ! fgrep 'verify erro' $f >/dev/null; then grep -m1 '^depth' $f; fi; done | cut -d' ' -f 2- | sort | uniq -c | sort -n
for f in *.path; do if ! fgrep 'verify erro' $f >/dev/null; then grep -m1 '^depth' $f; fi; done | cut -d' ' -f 2- | sed 's/.*O = //;s/, OU =.*//;s/, CN = //;s/The //;s/[",.]//g;s/ Inc//' | sort | uniq -c | sort -n
@syzdek
syzdek / locky-dga.c
Last active Feb 25, 2016
Locky Ransomware Domain Generation Algorithm
/*
 * Locky Ransomware Domain Generation Algorithm
 *
 * Original code from Forcepoint Security Labs:
 * https://blogs.forcepoint.com/security-labs/locky-ransomware-encrypts-documents-databases-code-bitcoin-wallets-and-more
 *
 * Code updated by David M. Syzdek <ten . kedzys @ divad> on 2016/02/24
 *
 * Compile with:
 * gcc -W -Wall -Werror -o locky-dga locky-dga.c
 */
View gist:66027c3399a6cea4eff35dd7247c6b60
#!/bin/bash
if [ -z "$1" ]; then
    keyspace=34359738368
else
    keyspace=$1
fi
echo "Keyspace: $keyspace"
View hash-or-encrypt.md

Via Twitter:

"The authors consider SQLi the main attack vector. Hashed tokens mitigate read-only SQLi; encrypted tokens mitigate read/write SQLi."

That actually doesn't buy you anything. Consider the following table schema:

CREATE TABLE reset_tokens (
    tokenid BIGSERIAL PRIMARY KEY,
    selector TEXT,
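The `selector` column points at the split-token pattern. A minimal sketch of that pattern (not the thread's actual code; `issue_token`/`check_token` and the hash choice are illustrative): the selector is stored and looked up in cleartext, while only a hash of the verifier is stored, so a read-only SQLi dump yields nothing directly usable as a token.

```python
import hashlib
import hmac
import secrets

def issue_token():
    """Create a reset token; return (token_for_user, row_to_store)."""
    selector = secrets.token_hex(8)   # 16 hex chars, stored cleartext, used for lookup
    verifier = secrets.token_hex(16)  # 32 hex chars, never stored directly
    row = (selector, hashlib.sha256(verifier.encode()).hexdigest())
    return selector + verifier, row

def check_token(token, rows):
    """Validate a user-supplied token against stored (selector, verifier_hash) rows."""
    selector, verifier = token[:16], token[16:]
    for sel, vhash in rows:
        if sel == selector:  # cleartext lookup key; timing here leaks nothing secret
            cand = hashlib.sha256(verifier.encode()).hexdigest()
            return hmac.compare_digest(cand, vhash)  # constant-time compare
    return False
```

The design point: the selector exists purely so the row can be found with a normal indexed query, keeping the constant-time comparison confined to the verifier half.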
@evilmog
evilmog / netntlmv1.py
Last active Apr 25, 2017
netntlmv1 prototype
import platform
import subprocess
import os
hash_input = raw_input("Please enter hash: ")
if not hash_input:
    hash_input = "johndoe::test-domain:1FA1B9C4ED8E570200000000000000000000000000000000:1B91B89CC1A7417DF9CFAC47CCDED2B77D01513435B36DCA:1122334455667788"
h_user, h_blank, h_domain, h_hash1, h_hash2, h_challenge = hash_input.split(':')
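For reference, splitting the built-in sample shows what each colon-separated field holds (a small sketch mirroring the split above; field meanings follow the standard NetNTLMv1 layout of user::domain:LM-response:NT-response:server-challenge):

```python
sample = (
    "johndoe::test-domain:"
    "1FA1B9C4ED8E570200000000000000000000000000000000:"
    "1B91B89CC1A7417DF9CFAC47CCDED2B77D01513435B36DCA:"
    "1122334455667788"
)
user, blank, domain, lm_resp, nt_resp, challenge = sample.split(":")

print(user)          # johndoe
print(domain)        # test-domain
print(len(lm_resp))  # 48 -- hex chars, i.e. a 24-byte LM response
print(len(nt_resp))  # 48 -- hex chars, i.e. a 24-byte NT response
print(challenge)     # 1122334455667788 (the 8-byte server challenge)
```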
View SecondValidationMethod.md

So, for validation, here is my suggestion. It works however people want to submit (separate hashes.txt and password.txt, or a combined "hash:pass" file).

  • Grab a copy of mdxfind/mdsplit from http://hashes.org. Ensure you have the contest-original pcrack.master.hashed.txt file (it will be read-only).

  • If separate files:

     mdxfind -f pcrack.master.hashed.txt -h ^sha1$ plaintext.txt >result
     mdsplit -f result hashes.txt

You will be left with hashes.txt and hashes.SHA1x01. hashes.SHA1x01 contains the validated cracks (wc -l it to get the count), and any invalid hash submissions will be left in hashes.txt; if that file is empty, all hashes validated.
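The final check described above can be scripted; this sketch creates stand-in files so it is runnable as-is (a real run would use the hashes.SHA1x01 and hashes.txt that mdsplit produces):

```shell
# stand-ins for mdsplit output -- remove these two lines when using real files
printf 'crack1\ncrack2\ncrack3\n' > hashes.SHA1x01
: > hashes.txt

# count validated cracks and confirm no invalid submissions remain
n=$(wc -l < hashes.SHA1x01)
echo "validated cracks: $((n + 0))"
if [ -s hashes.txt ]; then
    echo "invalid submissions remain in hashes.txt"
else
    echo "all hashes validated"
fi
```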

@miketweaver
miketweaver / mangler.py
Created Oct 16, 2017
PasswordCTF.com Mangler
import random
import os
import sys
import hashlib
import thread
leetrandomness = 1
temppassword = ""
use = False
@lakiw
lakiw / gist:64d1a93106fd501d4d680fffad076e12
Created Nov 2, 2017
Proposed approach to multi word detection in password cracking
The main challenge in detecting multi-words in passwords, for me, has been the lack of good wordlists/dictionaries.

Based on previous experience, my rule of thumb is that a "decent" dictionary will have about a 60% coverage rate for the training set. That number is based on very out-of-date experiments which, quite honestly, I need to update (if you are curious, I can look up where in my dissertation I documented them), which is why I consider it a rule of thumb rather than an accurate statement. You can get higher coverage by increasing the size of your dictionary, but at that point the amount of junk in your wordlist starts to make Markov-based brute force sound more attractive. Still, while some people might quibble with that 60% coverage statement (rightfully so), I think it highlights the wordlist issue: if I look for multi-words but the "golden list" I use in training only has 60% coverage, then this becomes a harder problem to solve.

In general it seems like a better approach is to build custom dictionaries
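One way to make the coverage problem concrete: a greedy longest-match segmenter against whatever dictionary you do have. This is a toy sketch, not lakiw's implementation; the tiny wordlist is illustrative. Any stretch of the password the dictionary cannot cover makes the whole segmentation fail, which is exactly how a 60%-coverage wordlist bites.

```python
def segment(password, wordlist, max_len=12):
    """Greedily split a password into known dictionary words, longest match first.

    Returns the list of words, or None if some stretch is not covered
    by the dictionary (a coverage gap)."""
    words, i = [], 0
    lowered = password.lower()
    while i < len(lowered):
        for j in range(min(len(lowered), i + max_len), i, -1):
            if lowered[i:j] in wordlist:
                words.append(lowered[i:j])
                i = j
                break
        else:
            return None  # no dictionary word starts here: coverage gap
    return words

words = {"correct", "horse", "battery", "staple"}
print(segment("CorrectHorse", words))  # ['correct', 'horse']
print(segment("horsefly", words))      # None -- 'fly' missing: the coverage problem
```

Note that greedy longest-match can also fail where a backtracking segmenter would succeed (when a long match swallows the start of the next word), which is a separate source of undercounting on top of dictionary coverage.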
@Hydraze
Hydraze / gist:372e221ef52ce8ddc6b5ba2108f2251f
Created Dec 12, 2017
PACK run on the 1.4 billion passwords ("BreachCompilation")
                       _
     StatsGen 0.0.3   | |
      _ __   __ _  ___| | _
     | '_ \ / _` |/ __| |/ /
     | |_) | (_| | (__|   <
     | .__/ \__,_|\___|_|\_\
     | |
     |_| iphelix@thesprawl.org