Skip to content

Instantly share code, notes, and snippets.

@n4s5ti
n4s5ti / 1-research.md
Created November 7, 2025 16:10 — forked from ruvnet/1-research.md
AI Manipulation Defense System

AI Manipulation Defense System: Comprehensive Development Plan

The AI Manipulation Defense System (AIMDS) is a production-ready framework built to safeguard AI models, APIs, and agentic infrastructures from adversarial manipulation, prompt injection, data leakage, and jailbreaking attempts. It’s designed for organizations deploying autonomous agents, LLM APIs, or hybrid reasoning systems that demand both speed and security.

Application

AIMDS integrates directly into AI pipelines—before or after model inference—to detect and neutralize malicious inputs. It’s ideal for:

  • Enterprise AI gateways securing LLM APIs.
  • Government and defense AI deployments requiring verified integrity.
  • Developers embedding guardrails within autonomous agents and chatbots.
@n4s5ti
n4s5ti / quic.md
Created November 7, 2025 16:09 — forked from ruvnet/quic.md
Agentic Flow 1.6.4 + QUIC: Transform the internet into a multi-threaded reasoning fabric with a few CLI commands

🚀 Agentic Flow 1.6.4 + QUIC: Make Your Network Think

Transform the internet into a multi-threaded reasoning fabric with a few CLI commands

🌐 Introduction: When Networks Become Intelligent

What if the internet could think? Not the apps at the edge, but the transport that ties them together. That is the premise of Agentic Flow 1.6.4 with QUIC: embed intelligence in the very pathways packets travel so reasoning is no longer a layer above the network, it is fused into the flow itself.

QUIC matters because TCP is a relic of a page-and-file era. TCP sequences bytes, blocks on loss, and restarts fragile handshakes whenever the path changes. QUIC was designed to fix those limitations. Originating at Google and standardized by the IETF as RFC 9000, QUIC runs over UDP, encrypts by default with TLS 1.3, and lets a single connection carry hundreds of independent streams. It resumes instantly with 0-RTT for returning peers and it migrates across networks without breaking session identity. In practice, this tur

@n4s5ti
n4s5ti / q.py
Created November 7, 2025 16:09 — forked from ruvnet/q.py
Q* (Q-Star)
# - Q* (Q-Star)
# /\__/\ - q.py
# ( o.o ) - v0.0.1
# >^< - by @rUv
# 01110010 01110101 01110110
# This is a proof of concept implementation of the Q* (AGI) leak from OpenAi
# This Python code defines a sophisticated Q-learning agent for reinforcement learning.
# It includes dynamic exploration, learning from experiences, and checks for convergence.
# The agent's capabilities are refined iteratively to optimize its decision-making strategy in a given environment.

Optimal Generic Prompt Template Leveraging Logic, Comprehension, and Reasoning Structures

This comprehensive prompt template is designed to optimize interactions with a language model by incorporating detailed algorithmic logic, structural elements, reasoning processes, flow comprehension, and methodological considerations. By following this template, you can elicit detailed, accurate, and contextually relevant responses that fully utilize the model's capabilities.


Template Overview

  1. Contextual Background
  2. Clear Instruction of Task
@n4s5ti
n4s5ti / subliminal.md
Created November 7, 2025 14:26 — forked from ruvnet/subliminal.md
A detailed algorithm and implementation framework for the 25th frame subliminal message technique:

A detailed algorithm and implementation framework for the 25th frame subliminal message technique:

The 25th Frame Subliminal Algorithm is a technique for embedding subliminal messages into video content based on human visual perception limitations. Here's a brief overview:

Core Concept

The algorithm works by inserting an additional frame containing a subliminal message into a standard video sequence that typically runs at 24 frames per second[1]. This extra frame is displayed for less than 50 milliseconds, making it imperceptible to conscious awareness while still being processed by the subconscious mind[3].

Key Benefits

  • Subconscious Processing: The inserted frames bypass conscious perception while still being registered by the brain[1][2].
  • Measurable Impact: Studies have shown physiological responses through EEG measurements, pulse monitoring, and blood oxygen levels when exposed to these frames[1].
@n4s5ti
n4s5ti / captcha-bot.md
Created November 7, 2025 14:25 — forked from ruvnet/captcha-bot.md
automate the process of solving CAPTCHAs using a desktop computer, you can combine several approaches involving automation tools

To automate the process of solving CAPTCHAs using a desktop computer, you can combine several approaches involving automation tools, CAPTCHA solving services, and scripting. Here’s a detailed guide on how to set this up:

Using Automation Tools

Power Automate Desktop

You can use Power Automate Desktop (formerly Microsoft Power Automate Desktop) to automate the interaction with the CAPTCHA. Here’s a general outline based on the videos and descriptions provided:

  1. Capture CAPTCHA Image:
    • Use Power Automate Desktop to capture a screenshot of the CAPTCHA challenge.
  • Extract the CAPTCHA image from the screenshot[1][4].
@n4s5ti
n4s5ti / Prediciton.md
Created November 7, 2025 14:22 — forked from ruvnet/Prediciton.md
A prediction framework & prompt uses a "future retrospective" approach where predictions are framed as historical analysis from a future date. This method has proven particularly effective for economic indicators, market trends, and event outcomes when combined with rigorous backtesting and statistical validation

Predictive Narrative Framework & Prompt

This framework leverages research from Baylor University showing that language models achieve significantly higher accuracy when making predictions through narrative storytelling rather than direct forecasting.

Research

How to structure prompts using the narrative approach that proved more successful than direct prediction:

Direct Prompt (Less Effective)

@n4s5ti
n4s5ti / polymorphic-obfuscation.md
Created November 7, 2025 14:22 — forked from ruvnet/polymorphic-obfuscation.md
This example demonstrates basic polymorphic obfuscation techniques, including encryption, variable code structure, and behavioral adaptation.

Polymorphic obfuscation algorithm involves several complex techniques to ensure the code changes its form each time it is executed, while maintaining its original functionality. Here’s a detailed overview and an example of how you might implement such an algorithm.

Key Techniques in Polymorphic Obfuscation

  1. Code Obfuscation:

    • Use encryption, compression, or other obfuscation methods to conceal the code's true nature.
    • Example: Encrypt the main body of the code and add a decryption function that decrypts the code before execution[3].
  2. Dynamic Encryption Keys:

  • Use different encryption keys for each new instance of the code.
@n4s5ti
n4s5ti / power-automate.md
Created November 7, 2025 14:22 — forked from ruvnet/power-automate.md
This project integrates OpenAI's GPT-4o large language model with Power Automate Desktop to create an advanced AI-powered automation system. It uses real-time streaming via WebSockets to enable the AI to observe and interact with your desktop, allowing for dynamic and intelligent automation of tasks.

Introduction:

This project integrates OpenAI's GPT-4o large language model with Power Automate Desktop to create an advanced AI-powered automation system. It uses real-time streaming via WebSockets to enable the AI to observe and interact with your desktop, allowing for dynamic and intelligent automation of tasks.

Key Features:

  • Real-time desktop streaming to GPT-4o via WebSockets
  • AI-powered analysis and decision making for desktop automation
  • Easy setup with guided installation script
  • Customizable automation actions and workflows
  • Seamless integration with Power Automate Desktop
@n4s5ti
n4s5ti / implementation.md
Created November 7, 2025 14:21 — forked from ruvnet/implementation.md
Test-Time Training is an innovative approach in machine learning that allows models to adapt and improve their performance during the testing phase

Basic PyTorch implementation of Llama 3.2:

import torch
import torch.nn as nn
import torch.nn.functional as F
import math

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):