antoniomtz / llm-wiki.md
Created April 14, 2026 21:38 — forked from karpathy/llm-wiki.md
LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode/Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and it generates an answer. This works, but the LLM rediscovers knowledge from scratch on every question; nothing accumulates. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

@antoniomtz
antoniomtz / run_yolov8_v9_vX_dlstreamer.md
Last active June 10, 2024 22:22
Run YOLOv8,9,X on Intel® DL Streamer 2024.0.2

This guide details the steps to deploy DL Streamer in a Docker container using a local webcam as the video source. Ensure your webcam is available at /dev/video0 before proceeding.

Prerequisites

  • Docker installed on your machine.
  • Webcam connected and recognized as /dev/video0.
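Before launching the container, you can sanity-check both prerequisites from a shell. This is a minimal sketch, assuming the webcam enumerates as /dev/video0 (adjust the path if your camera appears under a different device node):

```shell
#!/bin/sh
# Verify Docker is installed and the webcam device node exists.
DEVICE=/dev/video0

if command -v docker >/dev/null 2>&1; then
    echo "Docker found: $(docker --version)"
else
    echo "Docker is not installed or not on PATH" >&2
fi

if [ -e "$DEVICE" ]; then
    echo "Webcam device present at $DEVICE"
else
    echo "No device at $DEVICE - check the webcam connection" >&2
fi
```

If the device node is missing, `ls /dev/video*` will show which nodes (if any) your system exposes.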

Launch the DL Streamer Docker Container