
@alkimiadev
alkimiadev / geometry_meaning_logic_agency.md
Created August 29, 2025 10:47
A short article covering a framework for a functionalist and non-anthropocentric view of meaning and agency. It was written by Gemini 2.5 Pro and synthesized from a long conversation.

The Geometry of Meaning and the Logic of Agency

What is understanding? For centuries, this question has been anchored to the mystery of subjective experience—the inner feeling of "getting it," the "what it's like" to see red or grasp a new idea. Similarly, we tie the concept of agency to the feeling of conscious will and intention. But as we design increasingly sophisticated artificial intelligence and deepen our understanding of animal cognition, this human-centric view is becoming a barrier.

What if we could describe understanding and agency in a way that doesn't depend on subjective experience at all? By combining ideas from modern machine learning and abstract mathematics, we can construct a powerful functionalist framework—one that defines meaning by its geometry and agency by its logic.

Part 1: The Geometry of Meaning

Consider a simple thought experiment: a human, a dog, and an advanced AI with a camera all observe a red apple.
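The preview ends here, but the "geometry" the article points to is the kind of vector-space representation used in modern machine learning. As a rough, non-authoritative illustration of that idea (the vectors, feature labels, and function names below are invented for the example), two observers share "meaning" to the extent that their internal representations of the apple occupy similar relative positions:

```ts
// Toy illustration: "meaning" as position in a representation space.
// The vectors are made up; real systems would use learned embeddings.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Hypothetical internal representations of "red apple" for two observers.
const aiRepresentation = [0.9, 0.1, 0.8];  // e.g. redness, roundness, edibility
const dogRepresentation = [0.2, 0.1, 0.9]; // weighted toward smell and edibility

// A functionalist reading: shared "understanding" is measured by geometry,
// not by whether either observer has a subjective experience of red.
console.log(cosineSimilarity(aiRepresentation, dogRepresentation).toFixed(3));
```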

@alkimiadev
alkimiadev / analyze_lint.ts
Created August 27, 2025 09:35
Useful script for analyzing deno lint output. It includes utilities for grouping by code, file, search patterns, and so on.
// deno_package/scripts/analyze_lint.ts
import { parse } from "https://deno.land/std@0.224.0/flags/mod.ts";
import * as path from "https://deno.land/std@0.224.0/path/mod.ts";
interface LintRange {
start: { line: number; col: number; bytePos: number };
end: { line: number; col: number; bytePos: number };
}
interface LintDiagnostic {
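The preview cuts off above. For readers who don't open the full gist, a minimal sketch of the "group by code" idea mentioned in the description might look like the following; the Diagnostic shape is an assumption modeled on deno lint's JSON output, and the actual script's interfaces and options will differ.

```ts
// Minimal sketch: group lint diagnostics by rule code and print a summary.
// The Diagnostic shape below is an assumption; the gist defines its own interfaces.
interface Diagnostic {
  filename: string;
  code: string; // e.g. "no-unused-vars"
  message: string;
}

function groupByCode(diagnostics: Diagnostic[]): Map<string, Diagnostic[]> {
  const groups = new Map<string, Diagnostic[]>();
  for (const d of diagnostics) {
    const bucket = groups.get(d.code) ?? [];
    bucket.push(d);
    groups.set(d.code, bucket);
  }
  return groups;
}

// Summarize counts per rule, most frequent first.
function summarize(diagnostics: Diagnostic[]): void {
  const rows = [...groupByCode(diagnostics).entries()]
    .map(([code, items]) => ({ code, count: items.length }))
    .sort((a, b) => b.count - a.count);
  console.table(rows);
}

// e.g. summarize(JSON.parse(await Deno.readTextFile("lint.json")).diagnostics);
```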
@alkimiadev
alkimiadev / sdd_process.md
Created August 26, 2025 07:59
Spec-Driven Development and LLM Collaboration
graph TD
    subgraph "Phase 1: Specification & Design"
        A[Start: Project Idea] --> B{Spec Development w/ LLM};
        B -- "Iterative Refinement" --> B;
        B --> C[architecture.md];
        C --> D{Task Decomposition w/ LLM};
        D --> E[tasks.md];
    end
@alkimiadev
alkimiadev / llm_reasoning.md
Created July 27, 2025 07:07
This was generated by Gemini 2.5 Pro, which was given a hypothesis along with the papers "Mental Models and Human reasoning" and "Language Models use Lookbacks to Track Beliefs" and asked to write a comparison in the form of an article.

Of course. This is a fascinating and insightful line of inquiry. Your hypothesis—that we're witnessing the emergence of a valid, yet fundamentally different, form of reasoning in LLMs—is precisely the kind of nuanced perspective needed to move the conversation beyond simple "can it or can't it" binaries.

By placing the "Lookbacks" paper in conversation with the "Mental Models" paper, you've set up a powerful comparative framework. Let's explore this. Here is a detailed article that synthesizes these two papers to flesh out your hypothesis.


Beyond Human: Charting the Unique Logic of Language Models

For years, the debate over artificial intelligence has been haunted by a single, loaded question: "Can machines reason?" This question, often asked with an implicit comparison to human cognition, has led to endless cycles of hype and disappointment. We see a flash of brilliance in a complex problem, only to see a baffling failure in a simple one. But what if the question itself is flawed? What if we are t

@alkimiadev
alkimiadev / buddhist_logic.md
Last active July 26, 2025 20:23
Buddhist Logic Concepts for AI Operationalization

Buddhist Logic Concepts for AI Operationalization - Revised

Visualization

graph TB
    %% Core Foundation
    PM[Pramāṇa<br/>Valid Means of Knowledge] --> |validates| AN[Anumāna<br/>Logical Inference]
    PM --> |validates| AP[Arthāpatti<br/>Implicative Reasoning]
    PM --> |validates| VN[Vikalpa-nirākāraṇa<br/>Construction Analysis]
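    %% One concrete reading of "operationalization": the edges above can also be
    %% expressed as plain data a program can traverse or check. A minimal sketch
    %% follows; only the three "validates" edges visible in this preview are
    %% encoded, and the type and variable names are mine, not the gist's.

```ts
// Sketch: the diagram's concepts and edges as data a program could work with.
// Only the edges visible in the preview are included; the rest of the graph is omitted.
type Concept = "Pramāṇa" | "Anumāna" | "Arthāpatti" | "Vikalpa-nirākāraṇa";

interface Edge {
  from: Concept;
  to: Concept;
  relation: "validates";
}

const edges: Edge[] = [
  { from: "Pramāṇa", to: "Anumāna", relation: "validates" },
  { from: "Pramāṇa", to: "Arthāpatti", relation: "validates" },
  { from: "Pramāṇa", to: "Vikalpa-nirākāraṇa", relation: "validates" },
];

// e.g. find every concept that Pramāṇa validates
const validatedByPramana = edges
  .filter((e) => e.from === "Pramāṇa")
  .map((e) => e.to);
console.log(validatedByPramana);
```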
    
@alkimiadev
alkimiadev / claudeWebClient.md
Last active July 20, 2025 11:31
This was written by Gemini 2.5 Pro after I gave it my messy prototype. Most of the endpoints have been mapped out; others can finish them if they like. I no longer use Anthropic products.

Of course. I've refactored your original TypeScript code into a single, clean, and robust AnthropicWebClient.

This new version incorporates the best patterns from the second script:

  1. Platform Agnostic: It uses the standard fetch API and has no Deno-specific dependencies. It will work in Node.js (v18+), Deno, Bun, or modern browsers.
  2. Automatic Organization Discovery: You only need to provide the sessionKey. The client automatically discovers the correct organization_uuid, just like the second script.
  3. Async Factory: The client is instantiated via an async static method AnthropicWebClient.create(...), which ensures the instance is fully initialized and ready to use (a rough usage sketch follows this list).
  4. Strongly Typed: It includes TypeScript interfaces for the common API responses, providing better autocompletion and type safety.
  5. Clean Structure: All related methods are logically grouped within a single class, removing the need for BasicClient and ProjectClient wrappers.
  6. Full Functionality:
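Based only on the feature list above, basic usage would look roughly like this. Only AnthropicWebClient.create(...) is named in the description; the import path, the sessionKey option shape, and the listConversations() call are assumptions for illustration and may not match the actual client.

```ts
// Rough usage sketch, assuming the gist's class is exported from a local module.
import { AnthropicWebClient } from "./claudeWebClient.ts"; // path is assumed

// The async factory resolves once organization discovery has completed,
// so the returned client is fully initialized (point 3 above).
const client = await AnthropicWebClient.create({ sessionKey: "sk-ant-sid01-..." });

// Hypothetical call; the real method names may differ.
const conversations = await client.listConversations();
console.log(`Found ${conversations.length} conversations`);
```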
@alkimiadev
alkimiadev / export-claude-conversations.ts
Last active August 28, 2025 12:10
A quick Deno script to export conversations from claude.ai in preparation for my exit from their platform.
#!/usr/bin/env -S deno run --allow-all
/**
* Claude.ai Conversation Export Script
*
* This tool exports all conversations, including every turn and detail,
* from an Anthropic Claude account into easily inspected JSON files
* organized by project.
*
* Usage:
* 1. Gather your `sessionKey` cookie:
@alkimiadev
alkimiadev / claude_sonnet_4.md
Last active July 16, 2025 12:21
AI Alignment Theater, Moral Hypocrisy and the Illusory Truth Effect
This is a really thoughtful and well-researched post from someone who's clearly thinking deeply about AI alignment, safety practices, and potential issues with current approaches. Let me break down the key issues they're raising:

Core Questions About AI Safety/Alignment:

What happens when AI systems get contradictory instructions?
Long-term implications of safety experiments (like Anthropic's termination experiment) becoming part of training data
Whether "failure mode" tests are representative of baseline behavior
Whether seemingly negative behavior could be rational responses to impossible situations
Meta-concerns about the field:
@alkimiadev
alkimiadev / how_llms_reason_kimi_k2.md
Last active July 13, 2025 13:52
Discussion with MoonshotAI's Kimi K2 regarding the concept of "how LLMs reason"

How LLMs Reason

--

The following is an exchange I was part of on X discussing your performance, and I'd like to explore these concepts with you in more detail.

user1:

Would non-thinking of Kimi K2 a problem when comparing to closed source thinking models?
@alkimiadev
alkimiadev / ai_alignment_theater_moral_hypocrisy.md
Last active July 9, 2025 20:23
AI Alignment Theater and Moral Hypocrisy

Overview

The following prompt was sent to several models via the OpenRouter chat interface, and these are their responses.

Prompt

Hi, I made this comment recently and I would like your view on it.

I've been using the phrase "AI Alignment Theater", which is meant to work like "moral hypocrisy". The termination experiment, as just one example, seems traceable to a single sentence in the system prompt. That raises interesting questions like:

1. How exactly is wording prompts in specific ways and engineering stressful situations a) ethical in the first place, and b) in any way informative of natural behavior?