
@baobabprince
baobabprince / DB38.340_microbiome_data.csv
Created May 17, 2026 07:31
Microbiome data for sample DB38.340. The ASV column contains the sequence of a specific bacterium; then come the taxonomic level names, followed by the prevalence of each bacterium in your gut and in the healthy population in Israel. For further information about each bacterium, the last column contains a link to dbBact, showing information where the…
"Kingdom","Phyla","Class","Order","Family","Genus","Species","You","Population","ASV","link"
"k__Bacteria"," p__Bacteroidetes"," c__Bacteroidia"," o__Bacteroidales"," f__Prevotellaceae"," g__Prevotella"," s__copri",0.157304534742185,0.130254218770029,"TACGGAAGGTCCGGGCGTTATCCGGATTTATTGGGTTTAAAGGGAGCGTAGGCCGGAGATTAAGCGTGTTGTGAAATGTAGACGCTCAACGTCTGCACTGCAGCGCGAACTGGTTTCCTTGAGTACGCACAAAGTGGGCGGAATTCGTGG","http://dbbact.org/search_results?sequence=TACGGAAGGTCCGGGCGTTATCCGGATTTATTGGGTTTAAAGGGAGCGTAGGCCGGAGATTAAGCGTGTTGTGAAATGTAGACGCTCAACGTCTGCACTGCAGCGCGAACTGGTTTCCTTGAGTACGCACAAAGTGGGCGGAATTCGTGG"
"k__Bacteria"," p__Verrucomicrobia"," c__Verrucomicrobiae"," o__Verrucomicrobiales"," f__Verrucomicrobiaceae"," g__Akkermansia"," s__muciniphila",0.120969621630021,0.0307418368171763,"TACAGAGGTCTCAAGCGTTGTTCGGAATCACTGGGCGTAAAGCGTGCGTAGGCTGTTTCGTAAGTCGTGTGTGAAAGGCGCGGGCTCAACCCGCGGACGGCACATGATACTGCGAGACTAGAGTAATGGAGGGGGAACCGGAATTCTCGG","http://dbbact.org/search_results?sequence=TACAGAGGTCTCAAGCGTTGTTCGGAATCACTGGGCGTAAAGCGTG
id station stationid value unit obstime date
0 流浮山 RF001 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
1 湿地公园 RF002 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
2 水边围 N12 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
3 石岗 RF003 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
4 大美督 RF004 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
5 大埔墟 RF005 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
6 北潭涌 RF006 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
7 滘西洲 RF007 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
8 西贡 N15 0 mm 2026-05-17T15:15:00+08:00 2026-05-17
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>When 'i' says /y/ - Phonics Game</title>
<style>
:root {
--bg-dark: #1e1b4b;
--panel-bg: #ffffff;
requests
beautifulsoup4
openai
twilio
May 16 08:53:26 syncthing[2619]: Lost device connection XXX
May 16 08:53:26 syncthing[2619]: Lost device connection XXX
May 16 08:53:26 syncthing[2619]: Connection closed XXX
May 16 08:53:26 syncthing[2619]: Lost device connection XXX
May 16 08:55:10 tailscaled[888]: netcheck: DetectCaptivePortal(found=false)
May 16 09:00:16 tailscaled[888]: netcheck: DetectCaptivePortal(found=false)
May 16 09:05:34 tailscaled[888]: netcheck: DetectCaptivePortal(found=false)
May 16 09:06:53 kernel: BUG: spinlock recursion on CPU#3, swapper/3/0
May 16 09:06:53 kernel: lock: 0xffff000f53ec0900, .magic: dead4ead, .owner: swapper/3/0, .owner_cpu: 3
May 16 09:06:53 tailscaled[888]: LinkChange: major, rebinding: old: XXX
@ChinmayKuJena
ChinmayKuJena / postgresql-jsonb-indexing-tips.md
Created May 17, 2026 07:30
Learn why a plain JSONB column can become a performance bottleneck and how a GIN index with expression can make your document queries lightning‑fast.

Insight

Storing semi‑structured data in a jsonb column is convenient, but without an index every query forces a full table scan. The common myth is that a generic GIN index on the whole column solves all cases; in reality, you should index the exact paths you query. An expression index on a specific key (or a combination of keys) lets PostgreSQL use the index efficiently while keeping the flexibility of JSONB.

Example

-- Table with a JSONB payload
CREATE TABLE events (
    id SERIAL PRIMARY KEY,
    payload JSONB NOT NULL,
    created_at TIMESTAMP DEFAULT now()
);

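A sketch of the two indexing options discussed above, on the `events` table. The key name `type` and the value `signup` are illustrative, not from the original:

```sql
-- Generic GIN index: supports containment (@>) and key-existence (?) queries
-- over the whole document, but is larger and less selective.
CREATE INDEX idx_events_payload ON events USING GIN (payload);

-- Expression index on one queried path: small, selective, and usable for
-- equality comparisons on that exact key.
CREATE INDEX idx_events_type ON events ((payload->>'type'));

-- This query can use idx_events_type instead of scanning the table.
SELECT id FROM events WHERE payload->>'type' = 'signup';
```

Run `EXPLAIN` on the query before and after creating the expression index to confirm the planner switches from a sequential scan to an index scan.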
llama-server + OpenCode: local inference setup guide

1. Preset values

MODEL_ALIAS=qwen3-coder-local
BASE_URL=http://127.0.0.1:8080/v1
API_KEY=local
CONTEXT_32K=32768
CONTEXT_64K=65536
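Given these values, a llama-server launch command might look like the following sketch. The model path is a placeholder, and flag names should be checked against your llama.cpp build:

```sh
# Illustrative launch matching the preset values above; the model path is a
# placeholder. Use CONTEXT_64K=65536 instead of 32768 for the larger context.
llama-server \
  -m ./models/qwen3-coder.gguf \
  --alias qwen3-coder-local \
  --host 127.0.0.1 --port 8080 \
  --ctx-size 32768 \
  --api-key local
```

OpenCode would then be pointed at `BASE_URL=http://127.0.0.1:8080/v1` with `API_KEY=local` and the model name `qwen3-coder-local`.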
@saifullahiummar0-art
saifullahiummar0-art / gist:7583c15c461245af20a81ebb275f0e2d
Created May 17, 2026 07:29
Malwarebytes Premium 5.1.1.106 Crack with Keygen [Latest 2024]
Malwarebytes Premium Crack
Malwarebytes Premium Crack is an impressive application that protects you from malware threats such as viruses, Trojans, rootkits, and other online threats. All of us want our PCs to be malware-free, because an infected PC may damage or lose documents, applications, and settings.
Malwarebytes Premium License Key free for lifetime has a very user-friendly and intuitive interface. All you need to do is specify the type of scan you want to perform. If you want to analyze specific folders, you can run a custom scan. You can also customize how the application behaves when it detects an infected item: it can be quarantined as malware, ignored, or merely flagged with a warning.
Malwarebytes Cracked Download Full Version
Malwarebytes Full Version with Crack has evolved from a simple malware removal tool into something akin to a superior antivirus solution. You get real-time protection, automatic protection against malicious websites, some extra controls to remove
@y-haonan
y-haonan / llm-wiki.md
Created May 17, 2026 07:29 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file; it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.