Leon Morten Richter (M0r13n)

@M0r13n
M0r13n / tunny.py
Created February 9, 2025 13:32
Python script that demonstrates network namespace isolation and TUN device manipulation. The code creates a user namespace with root privileges and a network namespace, then sets up a TUN virtual network interface to handle ICMP (ping) traffic. It implements a basic IPv4 pseudo-gateway that responds to ICMP Echo Requests with Echo Replies.
#!/usr/bin/env python3
# sudo sysctl -w kernel.apparmor_restrict_unprivileged_unconfined=0
# sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
# unshare --user --map-user=0
from dataclasses import dataclass
import fcntl
import getpass
import multiprocessing
import os
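The preview above stops after the imports. As a rough illustration of the two core pieces the description mentions, here is a minimal sketch (not the gist's code; function names are illustrative) of opening a TUN device and answering ICMP Echo Requests. It assumes the namespace setup from the comments above has already been done and that the interface has been assigned an address and brought up:

import fcntl
import os
import struct

TUNSETIFF = 0x400454CA   # ioctl request to configure a TUN/TAP device
IFF_TUN = 0x0001         # layer-3 (IP) device
IFF_NO_PI = 0x1000       # no packet-information header in front of each frame

def open_tun(name="tun0"):
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", name.encode(), IFF_TUN | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    return fd

def checksum(data):
    # RFC 1071 one's-complement sum over 16-bit words
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def echo_loop(fd):
    while True:
        pkt = os.read(fd, 65535)
        if len(pkt) < 28 or pkt[0] >> 4 != 4 or pkt[9] != 1:
            continue                                   # only plain IPv4 ICMP
        ihl = (pkt[0] & 0x0F) * 4
        icmp = bytearray(pkt[ihl:])
        if icmp[0] != 8:                               # only Echo Requests
            continue
        ip = bytearray(pkt[:ihl])
        # swapping the whole 32-bit address fields keeps the IP header checksum valid
        ip[12:16], ip[16:20] = pkt[16:20], pkt[12:16]
        icmp[0] = 0                                    # turn it into an Echo Reply
        icmp[2:4] = b"\x00\x00"
        icmp[2:4] = struct.pack("!H", checksum(bytes(icmp)))
        os.write(fd, bytes(ip) + bytes(icmp))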
@M0r13n
M0r13n / fullmatch.py
Created December 26, 2024 10:54
Backport of pathlib's `full_match` to Python 3.10+
import pathlib
import re
import os
import functools
def _translate(pat, STAR, QUESTION_MARK):
    res = []
    add = res.append
    i, n = 0, len(pat)
    while i < n:
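For reference, the behaviour the backport reproduces is the one built into pathlib on Python 3.13+; on 3.10-3.12 the same checks would go through the gist's backported translation instead (how it is exposed is not visible in this preview):

from pathlib import PurePath

# full_match() must cover the whole path; '**' may span several segments.
print(PurePath("src/pkg/mod.py").full_match("src/**/*.py"))   # True
print(PurePath("src/pkg/mod.py").full_match("*.py"))          # False
# match() with a relative pattern only anchors from the right.
print(PurePath("src/pkg/mod.py").match("*.py"))               # True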
@M0r13n
M0r13n / inotify.py
Last active January 26, 2025 21:30
Play around with the inotify C-API in Python
import ctypes
import select
import pathlib
from ctypes import c_char_p, c_int, c_uint32
import os
class EventStruct(ctypes.Structure):
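The preview cuts off at the event structure. For context, here is a minimal ctypes sketch of driving the same C API (not the gist's code, which also uses select); it assumes a glibc-based Linux system and uses the standard constants from <sys/inotify.h>:

import ctypes
import ctypes.util
import os
import struct

IN_MODIFY = 0x00000002
IN_CREATE = 0x00000100
IN_DELETE = 0x00000200

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def watch(path):
    fd = libc.inotify_init()
    if fd < 0:
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))
    if libc.inotify_add_watch(fd, path.encode(), IN_MODIFY | IN_CREATE | IN_DELETE) < 0:
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))
    while True:
        buf = os.read(fd, 4096)
        offset = 0
        while offset < len(buf):
            # struct inotify_event: int wd; uint32 mask, cookie, len; char name[len]
            wd, mask, cookie, name_len = struct.unpack_from("iIII", buf, offset)
            name = buf[offset + 16:offset + 16 + name_len].rstrip(b"\x00").decode()
            print(f"wd={wd} mask={mask:#x} name={name!r}")
            offset += 16 + name_len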
@M0r13n
M0r13n / autostart.py
Last active December 24, 2024 12:13
Flask-like auto-reload of arbitrary executables/scripts on code change
#!/bin/env python3
"""This is a simple self-contained script to reload an arbitrary application
when its source changes (like using Flask in debug mode). It comes without
any 3rd party dependencies and can run standalone - a copy of this script is
all you need.
Examples:
./autostart.py -p '*.py' -i 'venv/*' flake8 autostart.py --max-line-length 120
./autostart.py -p '*.py' -i 'venv/*' mypy ./autostart.py
./autostart.py -p '*.py' -i 'venv/*' "$(which python3)" ./server.py
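A stripped-down sketch of the core loop (not the gist itself - flag parsing, ignore patterns and signal handling are omitted, and the helper names are illustrative): take a snapshot of the matching files' modification times, poll it, and restart the child whenever the snapshot changes.

import pathlib
import subprocess
import sys
import time

def snapshot(pattern):
    return {p: p.stat().st_mtime for p in pathlib.Path(".").rglob(pattern)}

def run_forever(cmd, pattern="*.py", interval=0.5):
    state = snapshot(pattern)
    child = subprocess.Popen(cmd)
    try:
        while True:
            time.sleep(interval)
            current = snapshot(pattern)
            if current != state:           # a file was added, removed or modified
                state = current
                child.terminate()
                child.wait()
                child = subprocess.Popen(cmd)
    finally:
        child.terminate()

if __name__ == "__main__":
    run_forever(sys.argv[1:])   # e.g. ./sketch.py flake8 autostart.py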
@M0r13n
M0r13n / nat.md
Created December 16, 2024 13:21
NAT Hole Punching using Netcat

NAT Traversal Setup with Netcat

This guide demonstrates a simple NAT traversal setup using tcpdump and nc (Netcat) for UDP communication.

Step 1: Monitor UDP Requests on the Server

On the server, use tcpdump to monitor the incoming UDP packets on port 12345:

sudo tcpdump -i any udp and port 12345
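The guide goes on to use nc for the UDP exchange itself. A rough Python equivalent of that exchange (a sketch rather than the gist's commands; addresses are placeholders) looks like this:

import socket

# Server side: the listening nc equivalent - bind and wait for a datagram.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("0.0.0.0", 12345))
data, client_addr = server.recvfrom(1024)   # this packet is what tcpdump shows above
server.sendto(b"pong", client_addr)         # reply into the mapping the client opened

# Client side (run on the other machine): send to the server's public address.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("198.51.100.7", 12345))   # placeholder public IP
print(client.recvfrom(1024))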

@M0r13n
M0r13n / hole.md
Created December 16, 2024 12:37
How to punch a hole through a stateful firewall using UDP and open a reverse shell on the server.

UDP Hole Punching

The following example demonstrates how to punch a hole through a stateful firewall using UDP. It opens a reverse shell on the server. A minimal Python sketch of the punching step itself follows the assumptions below.

⚠️ Using reverse or bind shells can be highly insecure and potentially illegal if executed without authorization. Always ensure you have explicit permission before performing such actions in a network.

Assumptions

  • Server: The target machine on which the shell will be opened.
  • Client: The machine used to remotely connect to the shell.
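A minimal Python sketch of the punching step, without the reverse shell (placeholder addresses; it assumes both peers already know each other's public IP and port, e.g. from the tcpdump step):

import socket

LOCAL_PORT = 12345
PEER = ("203.0.113.10", 12345)   # the peer's public endpoint (its NAT may rewrite the source port)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))

# Sending first creates an outbound entry in the local stateful firewall/NAT,
# so the peer's packets to our public endpoint are then treated as return traffic.
sock.sendto(b"punch", PEER)

data, addr = sock.recvfrom(1024)   # succeeds once the other side has punched too
print(f"received {data!r} from {addr}")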
@M0r13n
M0r13n / tracking.py
Created September 1, 2024 11:23
AIS: how to collect and maintain the state of individual vessels over time by correlating multiple messages with pyais
import pathlib
import pyais
from pyais.tracker import AISTrackEvent
def do_something(track):
    # called every time an AISTrack is created or updated
    print(track.mmsi, track)
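A simplified sketch of the same idea with a plain dict, using only pyais.decode (the gist itself builds on pyais' tracker and its AISTrackEvent callbacks). Here nmea_lines is assumed to be an iterable of raw, single-part AIS sentences as bytes, e.g. read from a file or a UDP socket:

import pyais

def track_vessels(nmea_lines):
    vessels = {}                                  # mmsi -> latest known state
    for raw in nmea_lines:
        msg = pyais.decode(raw)                   # this sketch ignores multi-part messages
        state = vessels.setdefault(msg.mmsi, {})
        for field in ("lat", "lon", "speed", "course", "shipname"):
            value = getattr(msg, field, None)     # not every message type carries every field
            if value is not None:
                state[field] = value
        print(msg.mmsi, state)                    # every decoded message refreshes the track
    return vessels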
@M0r13n
M0r13n / wc.py
Created June 21, 2024 11:49
A `wc`-like word count built on asynchronous state-machine parsing. Inspired by: https://github.com/robertdavidgraham/wc2
WAS_SPACE = 0
NEW_LINE = 1
NEW_WORD = 2
WAS_WORD = 3
SPACES = [9,10,11,12,13,32]
NEWLINE = 10
def init_table():
    # 0 => was space
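The preview stops inside init_table. A compressed illustration of the table-driven idea with the same state names (a possible construction, not necessarily the gist's exact table): every byte maps the current state to a next state, and the counters follow from how often the NEW_WORD and NEW_LINE states are entered.

WAS_SPACE, NEW_LINE, NEW_WORD, WAS_WORD = 0, 1, 2, 3
SPACES = [9, 10, 11, 12, 13, 32]   # tab, LF, vertical tab, form feed, CR, space
NEWLINE = 10

def build_table():
    # table[state][byte] -> next state
    table = [[0] * 256 for _ in range(4)]
    for state in range(4):
        for byte in range(256):
            if byte == NEWLINE:
                table[state][byte] = NEW_LINE
            elif byte in SPACES:
                table[state][byte] = WAS_SPACE
            elif state in (WAS_SPACE, NEW_LINE):
                table[state][byte] = NEW_WORD     # first byte of a new word
            else:
                table[state][byte] = WAS_WORD     # continuing a word
    return table

def wc(data):
    table = build_table()
    state, lines, words = WAS_SPACE, 0, 0
    for byte in data:
        state = table[state][byte]
        lines += state == NEW_LINE
        words += state == NEW_WORD
    return lines, words, len(data)

# wc(b"hello world\n") -> (1, 2, 12)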
FROM python:3.11-slim
WORKDIR /multi
COPY . .
CMD ["python", "./sender.py"]
@M0r13n
M0r13n / index.py
Last active February 1, 2024 10:13
Running a local Hugging Face model with llama_index
import torch
from llama_index.llms import HuggingFaceLLM
from llama_index.prompts import PromptTemplate
selected_model = 'mistralai/Mixtral-8x7B-Instruct-v0.1'
SYSTEM_PROMPT = """You are an AI assistant that answers questions in a friendly manner, based on the given source documents. Here are some rules you always follow:
- Generate human readable output, avoid creating output with gibberish text.
- Generate only the requested output, don't include any other language before or after the requested output.
- Never say thank you, that you are happy to help, that you are an AI agent, etc. Just answer directly.