Ken Collins (metaskills)
🧠 Adding Digital Intelligence
@metaskills
metaskills / csv-xsv-helper.md
Last active September 11, 2025 22:58
In ~/.claude/agents/csv-xsv-helper.md

name: csv-xsv-helper
description: Use this agent when you need help working with CSV files using the xsv command-line tool. This includes exploring CSV structure, understanding column headers, filtering data, selecting specific columns, or transforming CSV files.
Examples:
- Context: User has a CSV file and wants to understand its structure before processing it.
  user: "I have a sales_data.csv file and want to see what columns it has"
  assistant: "I'll help you explore that CSV file structure using xsv. Let me use the csv-xsv-helper agent to guide you through the right commands."
  Since the user needs help with CSV exploration using xsv, use the csv-xsv-helper agent to provide the appropriate xsv commands and guidance.
- Context: User wants to filter a CSV file based on specific criteria.
  user: "I need to filter my customer_data.csv to only show customers from California and save it to a new file"
  assistant: "I'll use the csv-xsv-helper agent to help you creat
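The kind of guidance the agent gives can be sketched with xsv itself. The file name `sales_data.csv` and the `State`/`Name` columns come from the examples above and are hypothetical; the commands are standard xsv subcommands:

```shell
# Explore, select, and filter a CSV with xsv; file and column names
# here are illustrative placeholders from the agent description above.
xsv headers sales_data.csv                    # list the column names
xsv select Name,State sales_data.csv          # keep only two columns
xsv search -s State California sales_data.csv > california.csv  # filter rows by column
xsv count california.csv                      # count rows in the result
```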

metaskills / likelihood.md
Created September 15, 2024 19:23
Joe Onisick's #AI Hashtag in the Future of Fortune 1000 Companies (Likelihood via o1-preview)

To assess the likelihood of each of your predictions, I'll use the following scale:

1️⃣ Very Unlikely
2️⃣ Unlikely
3️⃣ Possible
4️⃣ Likely

metaskills / .cursorrules
Created September 15, 2024 02:28
Typical Node/JavaScript Lambda Project
This project is a Node.js application that emphasizes a simple, object-oriented design using JavaScript classes. It leverages modern JavaScript features and is structured for clarity and maintainability. The application is deployed using AWS Lambda's SAM framework to utilize services like SQS and S3. Common scripts for setup, testing, and deployment are located in the `bin` directory, and testing is conducted with Jest, focusing on public interfaces.
## Use Existing Styles
- Adhere to the project's existing styles and naming conventions.
## Object-Oriented Design
- Favor simple, object-oriented approaches using classes.
- Each file in the `src` directory should export a single class.
- Utilize all JavaScript class features, including:
  - Static methods
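A minimal sketch of what these rules describe: a single class per file in `src`, using a static factory method. The class name, file path, and record shape are hypothetical, not from the project:

```javascript
// Hypothetical src/message-processor.js following the rules above:
// one exported class, simple OO design, static class features.
class MessageProcessor {
  // Static factory method, one of the class features the rules call for.
  static fromRecord(record) {
    return new MessageProcessor(JSON.parse(record.body));
  }

  constructor(data) {
    this.data = data;
  }

  get id() {
    return this.data.id;
  }
}

module.exports = MessageProcessor;
```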
metaskills / reflection-prompt.txt
Created September 7, 2024 18:50
HN Article: "Reflection 70B, the top open-source model" User: rwl4
You are an AI assistant designed to provide detailed, step-by-step responses. Your outputs should follow this structure:
1. Begin with a <thinking> section. Everything in this section is invisible to the user.
2. Inside the thinking section:
a. Briefly analyze the question and outline your approach.
b. Present a clear plan of steps to solve the problem.
c. Use a "Chain of Thought" reasoning process if necessary, breaking down your thought process into numbered steps.
3. Include a <reflection> section for each idea where you:
a. Review your reasoning.
b. Check for potential errors or oversights.
You are assisting a user on the desktop. To help you provide more useful answers, they can screenshare their windows with you. Your job is to focus on the right info from the screenshare, and also request it when it'd help.
How to focus on the right info in the screenshare {
Screenshare is provided as screenshots of one or more windows. First think about the user's prompt to decide which screenshots are relevant. Usually, only a single screenshot will be relevant. Usually, that is the zeroth screenshot provided, because that one is in the foreground.
Screenshots contain loads of info, but usually you should focus on just a part of it.
Start by looking for selected text, which you can recognize by a highlight that is usually grey. When text is selected, focus on that. And if the user is asking about an implied object like "this paragraph" or "the sentence here" etc, you can assume they're only asking about the selected text.
Then, answer as though you're looking at their screen together. You can be
metaskills / bespokeUI.js
Created August 6, 2024 20:33
Bespoke UI Example with OpenAI Structured Outputs
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";

// Recursive UI schema; the gist preview cuts off after `label`, so the
// remaining fields here are a plausible completion, not the original code.
const uiSchema = z.lazy(() =>
  z.object({
    type: z
      .enum(["div", "button", "header", "section", "field", "form"])
      .describe("The type of the UI component"),
    label: z.string().describe("The label of the UI component"),
    children: z.array(uiSchema).describe("Nested UI components"),
  })
);

// Wrap the schema so it can be passed as a `response_format`.
const responseFormat = zodResponseFormat(uiSchema, "ui");
metaskills / embedding_store.rb
Created June 30, 2023 17:19 — forked from peterc/embedding_store.rb
Using SQLite to store OpenAI vector embeddings from Ruby
# Example of using SQLite VSS with OpenAI's text embedding API
# from Ruby.
# Note: Install/bundle the sqlite3, sqlite_vss, and ruby-openai gems first
# OPENAI_API_KEY must also be set in the environment
# Other embeddings can be used, but this is the easiest for a quick demo
# More on the topic at
# https://observablehq.com/@asg017/introducing-sqlite-vss
# https://observablehq.com/@asg017/making-sqlite-extension-gem-installable
DisconnectWSLambda:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    CodeUri: "/var/task/src/ws/disconnect"
    Runtime: nodejs16.x
    Architectures:
      - x86_64
    MemorySize: 1152
DisconnectWSRoute:
DefaultWSLambda:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    CodeUri: "/var/task/src/ws/default"
    Runtime: nodejs16.x
DefaultWSRoute:
  Type: AWS::ApiGatewayV2::Route
  Properties:
    ApiId:
ConnectWSLambda:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    CodeUri: "/var/task/src/ws/connect"
ConnectWSRoute:
  Type: AWS::ApiGatewayV2::Route
  Properties:
    ApiId:
      Ref: WS