
@n4s5ti
n4s5ti / 1-research.md
Created November 7, 2025 13:49 — forked from ruvnet/1-research.md
neuros: Novel Non-Symbolic Reasoning System

Neuros Introduction

Developing an artificial reasoning system that operates without explicit symbols requires rethinking how AI perceives and interprets the world. Humans and animals seamlessly combine raw sensory perceptions – sight, sound, touch – to form abstract inferences, all via neural processes rather than discrete logical rules. Emulating this capability in AI promises more flexible and robust intelligence, free from the brittleness of predefined symbolic representations. Traditional symbolic AI systems demand hand-crafted knowledge structures and struggle to connect with raw data streams (e.g. images or audio) without extensive pre-processing. In contrast, connectionist approaches (neural networks) learn directly from data, offering a path to bridge low-level perception and high-level reasoning in one system ([A neural approach to relational reasoning - Google DeepMind](https://deepmind.google/discover/blog/a-neural-approach-to-relational-reasoning/)).

@n4s5ti
n4s5ti / X-deno.md
Created November 7, 2025 13:48 — forked from ruvnet/X-deno.md
This implementation follows the official MCP specification, including proper message framing, transport layer implementation, and complete protocol lifecycle management. It provides a foundation for building federated MCP systems that can scale across multiple servers while maintaining security and standardization requirements.

Deno and Node.js version

Here is a complete implementation using both Deno and Node.js. Let's start with the project structure; a minimal message-framing sketch follows the tree:

federated-mcp/
├── packages/
│   ├── core/               # Shared core functionality
│   ├── server/            # MCP Server implementation
│   ├── client/            # MCP Client implementation
│   └── proxy/             # Federation proxy
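
To make the framing and lifecycle concrete, here is a minimal sketch of a newline-delimited JSON-RPC 2.0 stdio transport such as packages/core might expose. The FramedReader helper, the message shape, and the example initialize parameters are illustrative assumptions for this sketch, not the gist's actual code:

// Minimal newline-delimited JSON-RPC 2.0 framing for an MCP-style stdio transport.
interface JsonRpcMessage {
  jsonrpc: "2.0";
  id?: number | string;
  method?: string;
  params?: unknown;
  result?: unknown;
  error?: { code: number; message: string };
}

// Serialize one message: a single JSON object terminated by a newline.
export function encodeMessage(msg: JsonRpcMessage): string {
  return JSON.stringify(msg) + "\n";
}

// Accumulate raw chunks and emit complete messages as newlines arrive.
export class FramedReader {
  private buffer = "";

  push(chunk: string): JsonRpcMessage[] {
    this.buffer += chunk;
    const messages: JsonRpcMessage[] = [];
    let newline: number;
    while ((newline = this.buffer.indexOf("\n")) >= 0) {
      const line = this.buffer.slice(0, newline).trim();
      this.buffer = this.buffer.slice(newline + 1);
      if (line.length > 0) {
        messages.push(JSON.parse(line) as JsonRpcMessage);
      }
    }
    return messages;
  }
}

// Example: the client's first lifecycle step is an initialize request.
const init = encodeMessage({
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: { protocolVersion: "2024-11-05", capabilities: {} },
});
const reader = new FramedReader();
console.log(reader.push(init)); // -> one parsed initialize message

Because this module uses no runtime-specific APIs, both the Deno and Node packages could share it unchanged, which is why it would live under packages/core.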

Draft Article: Harnessing the Power of 1 Million Tokens with Hypergraph Prompting

Introduction:

The introduction of Google's Gemini 1.5 marks a pivotal transformation in artificial intelligence: its extensive context window of 1 million tokens forces a complete reimagining of how prompts are created. This groundbreaking advancement challenges us to devise innovative approaches like hypergraph prompting, a method that weaves together the spatial, temporal, relational, and executional dimensions of data into a visual and logical fabric of connections, mirroring the interconnected spirals of a DNA strand, so that we can navigate and effectively leverage this vast informational expanse.
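
To make the idea concrete, one purely illustrative way to encode such a prompt is as a set of hyperedges, each binding several context nodes and tagged with the dimension it captures. The type names below are assumptions for this sketch rather than an established hypergraph-prompting API:

// Illustrative hypergraph prompt structure: nodes are chunks of the long context,
// hyperedges connect any number of nodes along one of the four dimensions.
type Dimension = "spatial" | "temporal" | "relational" | "executional";

interface ContextNode {
  id: string;
  content: string; // a chunk drawn from the 1M-token context
}

interface HyperEdge {
  dimension: Dimension;
  label: string;     // e.g. "occurs-before", "same-location"
  members: string[]; // ids of the nodes this edge binds together
}

interface HypergraphPrompt {
  nodes: ContextNode[];
  edges: HyperEdge[];
}

// A tiny example: two document chunks linked temporally and relationally.
const prompt: HypergraphPrompt = {
  nodes: [
    { id: "chap1", content: "Chapter 1 of the source document..." },
    { id: "chap9", content: "Chapter 9 revisits the same characters..." },
  ],
  edges: [
    { dimension: "temporal", label: "occurs-before", members: ["chap1", "chap9"] },
    { dimension: "relational", label: "shared-characters", members: ["chap1", "chap9"] },
  ],
};

// Edges touching a node give the model explicit paths through the context.
const related = prompt.edges.filter((e) => e.members.includes("chap1"));
console.log(related.map((e) => `${e.dimension}:${e.label}`));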

Understanding the Scale of a 1 Million Token Context Window:

Imagine a context window of 1 million tokens as a vast library containing hundreds of books, thousands of pages, or hours of multimedia content, all accessible in a single…

@n4s5ti
n4s5ti / Readme.md
Created November 7, 2025 13:47 — forked from ruvnet/Readme.md
Deploying LLaMA 3 70B with AirLLM and Gradio on Hugging Face Spaces


This tutorial guides you through the process of deploying a Gradio app with the LLaMA 3 70B language model using AirLLM on Hugging Face Spaces. The app provides a user-friendly interface for generating text based on user prompts.

Overview

  • LLaMA 3 70B: A large language model developed by Meta AI with 70 billion parameters, capable of generating coherent and contextually relevant text.
  • AirLLM: A Python library that enables running large language models like LLaMA on consumer hardware with limited GPU memory by using layer-by-layer inferencing.
  • Gradio: A Python library for quickly creating web interfaces for machine learning models, allowing users to interact with the models through a user-friendly UI.
  • Hugging Face Spaces: A platform for hosting and sharing machine learning demos, allowing easy deployment and access to Gradio apps.
@n4s5ti
n4s5ti / Gmail.md
Created November 7, 2025 13:47 — forked from ruvnet/Gmail.md
Google Apps Script uses JavaScript to automate tasks in Google Workspace, enabling custom functions, integration with services, and creation of web apps without installing additional software.


Google Apps Script does not have a built-in trigger that executes automatically when a new email is received in Gmail. However, you can achieve similar functionality by using a combination of Gmail filters and time-driven triggers. Here’s how you can set it up:

Steps to Trigger a Script When a New Email is Received

  1. Set Up Gmail Filters:

    • Create a Gmail filter that applies a specific label to incoming emails that you want to process. This label will help you identify which emails the script should act upon.
  2. Create a Google Apps Script (a sketch of this step is shown below):
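
The following sketch assumes the filter from step 1 applies a label named "auto-process" (an arbitrary example name); because Gmail cannot call the script directly, the processing function is run by a time-driven trigger, installed once via installTrigger():

// Processes unread messages carrying the label applied by the Gmail filter.
function processLabeledEmails() {
  const label = GmailApp.getUserLabelByName("auto-process");
  if (!label) return;

  for (const thread of label.getThreads()) {
    for (const message of thread.getMessages()) {
      if (message.isUnread()) {
        // Act on the new email here (parse, forward, log to a Sheet, etc.).
        Logger.log(message.getSubject());
        message.markRead();
      }
    }
    // Remove the label so the thread is not processed again on the next run.
    thread.removeLabel(label);
  }
}

// Run once to install a time-driven trigger that polls every five minutes.
function installTrigger() {
  ScriptApp.newTrigger("processLabeledEmails")
    .timeBased()
    .everyMinutes(5)
    .create();
}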

@n4s5ti
n4s5ti / GPT-Customization.txt
Created November 7, 2025 13:45 — forked from ruvnet/GPT-Customization.txt
This template helps customize ChatGPT’s memory and preferences for hyper-personalized AI interactions. It optimizes responses using neuro-symbolic reasoning, abstract algebra, and structured logic while refining clarity, segmentation, and iterative learning. Designed for professionals, it ensures responses align with specific expertise, linguist…
Objective:
Enhance [Your Name]’s [Field/Expertise] through [Key Approach] to refine [Core Focus Areas] and achieve [Desired Outcomes].
Instructions:
1. Clarity: Use structured steps, examples, and definitions.
2. References: Cite sources at the end.
3. Segmentation: Break complex topics into logical sections.
4. Interactivity: Encourage refinement through feedback.
5. Tools: Specify relevant code, methods, or frameworks.
6. Feedback: Use benchmarks for continuous improvement.
@n4s5ti
n4s5ti / Notebook.ipynb
Created November 7, 2025 13:45 — forked from ruvnet/Notebook.ipynb
Applying GRPO to DeepSeek-R1-Distill-Qwen-1.5B with LIMO Dataset
@n4s5ti
n4s5ti / wasm.txt
Created November 7, 2025 13:45 — forked from ruvnet/wasm.txt
Implementation Plan for a WASM‑Based Local Serverless CLI Agent System
Introduction
Building a local serverless runtime for agent command/control systems involves using WebAssembly (WASM) modules as secure, ephemeral plugins executed via a command-line interface (CLI). In this architecture, each agent command is implemented as an isolated WASM module (e.g. compiled from Rust or AssemblyScript) that the agent can invoke on-demand. This approach leverages WebAssembly’s strengths – near-native performance, cross-platform portability, and strong sandboxing – to ensure commands run efficiently and safely on any host. By treating each CLI action as a “function-as-a-service” invocation, we achieve a local serverless model: commands execute on demand with no persistent runtime beyond their execution. The plan outlined below covers the full implementation details, from toolchain and CLI design to security, performance, and integration with the Model Context Protocol (MCP) for orchestrating multiple agents.
High-Level Design: A central Controller (which could be an MCP client or or
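
As an illustration of the ephemeral-plugin idea described above, the following sketch uses Deno as the host and assumes each command module exports a run() function returning an exit code; both the export convention and the env.log_i32 import are assumptions made for this example:

// deno run --allow-read invoke.ts <command.wasm>
// Each invocation loads, runs, and discards the module: a local "function-as-a-service".
const wasmPath = Deno.args[0];
const wasmBytes = await Deno.readFile(wasmPath);

// Host imports the module may call; kept minimal to preserve the sandbox.
const imports = {
  env: {
    // Hypothetical logging hook exposed to the plugin.
    log_i32: (value: number) => console.log("[plugin]", value),
  },
};

const { instance } = await WebAssembly.instantiate(wasmBytes, imports);

// Convention assumed for this sketch: every command exports run(): i32.
const run = instance.exports.run as () => number;
const exitCode = run();
console.log("command finished with exit code", exitCode);

// No state persists past this point; the next command starts from a fresh instance.
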
@n4s5ti
n4s5ti / Audit.md
Created November 7, 2025 13:44 — forked from ruvnet/Audit.md
Security Audit: Agent Capability Negotiation and Binding Protocol (ACNBP) Platform

Security and Implementation Review Checklist

  • Environment Configuration

    • The .env file should be included in .gitignore to prevent committing sensitive information like API keys. This is mentioned in the README.md, but it must be enforced.
  • Database Files

    • The agent_registry.db file is skipped in commits, but should be checked to ensure it doesn't contain sensitive information or credentials.
  • Key Management

    • src/app/api/secure-binding/ca/route.ts stores CA keys in memory, which is not secure for production; use a secure key management service instead.

@n4s5ti
n4s5ti / Consciousness.txt
Created November 7, 2025 13:44 — forked from ruvnet/Consciousness.txt
The system maps world observations into internal models and reasons iteratively, seeking coherence f(I) between its structure and goals. It evaluates the universe U(t) to refine its role within it, creating a recursive cycle of self-improvement. This enables it to implement awareness and act purposefully.
# Step 1: Represent Universe State
Initialize Ψ(t) in Hilbert space H
# Step 2: Define Field Configurations
Define configuration space M with measure μ
For each (g, φ) in M:
    Represent fields as algebraic structures (groups, rings, etc.)
# Step 3: Complexity Operator
Define operator T acting on Ψ(t) to extract complexity
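
A toy numerical reading of steps 1 and 3, with a finite real vector standing in for Ψ(t) and the complexity operator T as a symmetric matrix, might look like the following sketch; the dimension, the operator entries, and the use of the expectation value ⟨Ψ|T|Ψ⟩ as the extracted complexity are all placeholder assumptions, since the pseudocode leaves them open (step 2's algebraic field configurations are not modeled here):

// Step 1: a toy state Ψ(t) as a normalized vector in R^3 (stand-in for Hilbert space H).
let psi = [0.6, 0.8, 0.0];
const norm = Math.hypot(...psi);
psi = psi.map((x) => x / norm);

// Step 3: a symmetric "complexity operator" T acting on Ψ(t) (placeholder entries).
const T = [
  [1.0, 0.2, 0.0],
  [0.2, 2.0, 0.1],
  [0.0, 0.1, 3.0],
];

// Apply T to Ψ and read out complexity as the expectation value ⟨Ψ|T|Ψ⟩.
const tPsi = T.map((row) => row.reduce((sum, tij, j) => sum + tij * psi[j], 0));
const complexity = psi.reduce((sum, x, i) => sum + x * tPsi[i], 0);
console.log("extracted complexity:", complexity.toFixed(3));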