
@mistificator
mistificator / VkGetLink.user.js
Last active May 17, 2026 01:53
Script for downloading audio and video files from vk.com
@test482
test482 / clash-meta-config.yaml
Last active May 17, 2026 01:38
Clash Meta config
# Meta JSON Schema: A linter and snippet provider for Clash.Meta(mihomo) configuration.
log-level: info
external-controller: 127.0.0.1:9090
# external-ui: /usr/share/metacubexd/
# secret: "set your access token here"
mode: rule
mixed-port: 7890
allow-lan: false
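Because `external-controller` is enabled above, a running instance can be inspected over its REST API. A minimal sketch, assuming the standard Clash API endpoints (`/version`, `/proxies`); the `clash_api` helper name and `CLASH_SECRET` variable are placeholders, not part of the config:

```shell
#!/usr/bin/env bash
# Query the external controller declared in the config (127.0.0.1:9090).
set -euo pipefail

CONTROLLER="http://127.0.0.1:9090"
SECRET="${CLASH_SECRET:-}"   # leave empty if no `secret` is configured

clash_api() {
  # GET an API path, e.g. clash_api /version or clash_api /proxies.
  # When a secret is set, it is passed as a Bearer token.
  curl -s ${SECRET:+-H "Authorization: Bearer $SECRET"} "$CONTROLLER$1"
}
```

For example, `clash_api /version` returns the running core version as JSON once the proxy is up.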

Flutter Clean Architecture Implementation Guide

Overview

This document provides comprehensive guidelines for implementing a Flutter project following Clean Architecture principles. The project structure follows a modular approach with clear separation of concerns, making the codebase maintainable, testable, and scalable.

lib/
├── core/
│   ├── database/             

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.

@AmberMarie4444
AmberMarie4444 / add-peers.sh
Created May 4, 2026 08:47 — forked from meyer9/add-peers.sh
NETWORK=mainnet ./add-peers.sh
#!/usr/bin/env bash
set -euo pipefail
RPC_URL="${RPC_URL:-http://localhost:8545}"
NETWORK="${NETWORK:-}"
SEPOLIA_ENODES=(
"enode://45a06f4fc806768de7615174b9c8258db64202e6443418a3d8febea9362ba694272583e2c82a9755f9d450bf3b39a2af3336da579faf37deeb5bf724d8d19f5e@3.229.122.30:30303?discport=30304"
"enode://b307ebfb325c0ebf14e169447f426b931307e14889f1ecc0111686c28cbeaf937a2455d06cd278222edf385f1c855b0f824b2adde748efec53dd66963656aa14@44.201.35.121:30303?discport=30304"
"enode://a4805cc020882d9b79dc394529d3287009eee5cfb0efdac17fb7a54a2f646208f6f3de2bfd3405c4884908fb7a19c41ecfa62e63318d2f7cf434f397bf48e607@34.235.124.239:30303?discport=30304"
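The fragment above ends mid-array. The usual pattern for a script like this is to wrap each enode in an `admin_addPeer` JSON-RPC request and POST it to the node. A hedged sketch of that step — the `build_payload` and `add_peer` helper names are mine, not from the forked gist:

```shell
#!/usr/bin/env bash
set -euo pipefail

RPC_URL="${RPC_URL:-http://localhost:8545}"

build_payload() {
  # Wrap one enode URL in an admin_addPeer JSON-RPC request body.
  printf '{"jsonrpc":"2.0","method":"admin_addPeer","params":["%s"],"id":1}' "$1"
}

add_peer() {
  # POST the request to the node's RPC endpoint (admin API must be enabled).
  curl -s -X POST -H 'Content-Type: application/json' \
       --data "$(build_payload "$1")" "$RPC_URL"
}
```

A loop over the enode array would then call `add_peer "$enode"` for each entry.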
@safa-dayo
safa-dayo / LTX-Video_ComfyUI.sh
Last active May 17, 2026 00:04
Google Colab notebook for trying LTX-Video with ComfyUI
#@title Environment Setup
### setup ComfyUI ###
from pathlib import Path
OPTIONS = {}
USE_GOOGLE_DRIVE = False #@param {type:"boolean"}
UPDATE_COMFY_UI = True #@param {type:"boolean"}
USE_COMFYUI_MANAGER = True #@param {type:"boolean"}
@LessUp
LessUp / glm-coding-plan-rush-helper.user.js
Last active May 17, 2026 00:03
⚡ GLM Coding Rush — one-click auto-purchase userscript for the Zhipu GLM Coding plan | Auto-unlock sold-out · High-speed retry · Scheduled trigger · Payment guard · Bilingual panel | Tampermonkey/Violentmonkey | Click Raw to install
// ==UserScript==
// @name GLM Coding Rush - Zhipu coding assistant purchase script
// @namespace https://gist.github.com/LessUp
// @version 1.1.0
// @description One-click purchase script for Zhipu GLM Coding — auto-unlocks sold-out buttons / high-speed retry engine / double bizId validation / auto-recovery from error dialogs / payment-dialog guard / second-precision scheduled trigger / draggable floating panel
// @author LessUp
// @match *://www.bigmodel.cn/*
// @match https://bigmodel.cn/glm-coding*
// @run-at document-start
// @grant none
@peppergrayxyz
peppergrayxyz / qemu-vulkan-virtio.md
Last active May 16, 2026 23:56
QEMU with VirtIO GPU Vulkan Support

QEMU with VirtIO GPU Vulkan Support

With its latest release, QEMU added the Venus patches, so virtio-gpu now supports Venus encapsulation for Vulkan. This is one more piece of the puzzle towards full Vulkan support.

An outdated blog post on Collabora described in 2021 how to enable 3D acceleration of Vulkan applications in QEMU through the experimental Venus Vulkan driver for VirtIO-GPU, using a local development environment. Following up on that write-up, this is how it's done today.
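For orientation, this is the shape of the invocation the rest of the guide builds toward — a sketch assuming a QEMU build with Venus support (`virtio-vga-gl` with `venus=true`) and a guest image with the Venus Mesa driver; `disk.qcow2` and the memory sizes are placeholders:

```shell
qemu-system-x86_64 \
  -enable-kvm -M q35 -m 4G \
  -object memory-backend-memfd,id=mem1,size=4G \
  -machine memory-backend=mem1 \
  -device virtio-vga-gl,hostmem=4G,blob=true,venus=true \
  -display sdl,gl=on \
  -drive file=disk.qcow2,format=qcow2
```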

Definitions

Let's start with a brief description of the projects mentioned in that post, and extend it: