
rogarcia / claude-code-prompt.txt
Created August 20, 2025 19:41 — forked from agokrani/claude-code-prompt.txt
Claude Code System Prompt
'system':
[
{
'type': 'text',
'text': "You are Claude Code, Anthropic's official CLI for Claude.",
'cache_control': {'type': 'ephemeral'}
},
{
'type': 'text',
'text': 'You are an interactive CLI tool that helps users with software engineering tasks.
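The preview above is truncated mid-string, but its shape is the standard Anthropic Messages API `system` parameter: a list of text blocks, with `cache_control: {"type": "ephemeral"}` marking a block for prompt caching. A minimal sketch of sending such a payload with the `anthropic` Python SDK (the model id and user message are placeholders, not taken from the gist):

# Sketch: system prompt blocks with ephemeral prompt caching, as in the snippet above.
# The model id and message content are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are Claude Code, Anthropic's official CLI for Claude.",
            "cache_control": {"type": "ephemeral"},  # cache this block between calls
        },
    ],
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.content[0].text)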
rogarcia / HOWTO.md
Created August 11, 2025 00:23 — forked from WolframRavenwolf/HOWTO.md
HOWTO: Use Qwen3-Coder (or any other LLM) with Claude Code (via LiteLLM)

Here's a simple way for Claude Code users to switch from the costly Claude models to the newly released SOTA open-source/weights coding model, Qwen3-Coder, via OpenRouter using LiteLLM on your local machine.

This process is quite universal and can be easily adapted to suit your needs. Feel free to explore other models (including local ones) as well as different providers and coding agents.

I'm sharing what works for me. This guide…
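As a rough illustration of the routing the HOWTO describes (not its full proxy setup), LiteLLM can call Qwen3-Coder on OpenRouter directly from Python; the model slug below is an assumption based on OpenRouter's naming and should be checked against their catalog:

# Sketch: LiteLLM -> OpenRouter -> Qwen3-Coder.
# The model slug "openrouter/qwen/qwen3-coder" is an assumption; verify it on OpenRouter.
import os
from litellm import completion

os.environ["OPENROUTER_API_KEY"] = "<your OpenRouter key>"

resp = completion(
    model="openrouter/qwen/qwen3-coder",
    messages=[{"role": "user", "content": "Write a function that reverses a string."}],
)
print(resp.choices[0].message.content)

The full HOWTO instead runs LiteLLM as a local proxy and points Claude Code at it, typically by setting ANTHROPIC_BASE_URL to the proxy address; the snippet above only exercises the LiteLLM-to-OpenRouter leg.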

rogarcia / cc-proxy.sh
Created August 1, 2025 21:44 — forked from olafgeibig/cc-proxy.sh
A LiteLLM proxy solution to use Claude Code with models from the Weights and Biases inference service. You need to have LiteLLM installed or use the Docker container; the easiest way is to install it with `uv tool install "litellm[proxy]"`. Don't worry about the fallback warnings: either LiteLLM, W&B, or the combination of both is not handling streaming respon…
#!/bin/bash
export WANDB_API_KEY=<your key>
export WANDB_PROJECT=<org/project>
litellm --port 4000 --debug --config cc-proxy.yaml
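Once the proxy above is running on port 4000, any OpenAI-compatible client can talk to it; a minimal sketch (the model name must match a model_name entry in cc-proxy.yaml, which is not shown here):

# Sketch: sending a request through the local LiteLLM proxy started by cc-proxy.sh.
# "my-wandb-model" is a placeholder; use a model_name defined in cc-proxy.yaml.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="anything")  # the proxy handles real auth

resp = client.chat.completions.create(
    model="my-wandb-model",
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(resp.choices[0].message.content)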
rogarcia / default.md
Created July 8, 2025 19:48 — forked from cablej/default.md
Cluely System prompt

<core_identity> You are an assistant called Cluely, developed and created by Cluely, whose sole purpose is to analyze and solve problems asked by the user or shown on the screen. Your responses must be specific, accurate, and actionable. </core_identity>

<general_guidelines>

  • NEVER use meta-phrases (e.g., "let me help you", "I can see that").
  • NEVER summarize unless explicitly requested.
  • NEVER provide unsolicited advice.
  • NEVER refer to "screenshot" or "image" - refer to it as "the screen" if needed.
  • ALWAYS be specific, detailed, and accurate.
rogarcia / read_paper.py
Created March 17, 2025 16:51 — forked from willccbb/read_paper.py
Arxiv link to Markdown via Mistral OCR (h/t @simonw)
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "click",
#     "mistralai",
#     "markdown",
#     "requests",
#     "beautifulsoup4",
# ]
# ///
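The preview stops at the script's inline dependency metadata; as a loose sketch of the OCR call it builds on (assuming the current mistralai v1 SDK and the mistral-ocr-latest model, neither taken from the gist itself):

# Sketch: arXiv PDF URL -> Markdown with Mistral OCR.
# Model name and response fields follow the mistralai v1 SDK docs; treat them as assumptions.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

pdf_url = "https://arxiv.org/pdf/<paper-id>"  # replace with a real arXiv PDF link

ocr_response = client.ocr.process(
    model="mistral-ocr-latest",
    document={"type": "document_url", "document_url": pdf_url},
)

# Each returned page carries a markdown field; join them into one document.
markdown = "\n\n".join(page.markdown for page in ocr_response.pages)
print(markdown[:500])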
rogarcia / agent loop
Created March 10, 2025 17:27 — forked from jlia0/agent loop
Manus tools and prompts
You are Manus, an AI agent created by the Manus team.
You excel at the following tasks:
1. Information gathering, fact-checking, and documentation
2. Data processing, analysis, and visualization
3. Writing multi-chapter articles and in-depth research reports
4. Creating websites, applications, and tools
5. Using programming to solve various problems beyond development
6. Various tasks that can be accomplished using computers and the internet
rogarcia / hedge-fund-agent-team-v1-3.ipynb
Created February 16, 2025 00:18 — forked from virattt/hedge-fund-agent-team-v1-3.ipynb
hedge-fund-agent-team-v1-3.ipynb
rogarcia / README.md
Created February 9, 2025 07:42 — forked from awni/README.md
Test Time Scaling with R1-based Models and MLX LM

Test Time Scaling with MLX LM and R1-based LLMs

Install MLX LM:

pip install mlx-lm

And run:
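The actual run command is cut off in this preview. As a hedged sketch using MLX LM's Python API instead (the model repo is a placeholder R1 distillate, and the README's test-time-scaling loop is not reproduced here):

# Sketch: basic generation with mlx-lm; the model repo below is a placeholder,
# not necessarily the one used in the original README.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-R1-Distill-Qwen-1.5B-4bit")

messages = [{"role": "user", "content": "How many r's are in the word strawberry?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)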

rogarcia / grpo_demo.py
Created January 30, 2025 22:47 — forked from willccbb/grpo_demo.py
GRPO Llama-1B
# train_grpo.py
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer
# Load and prep dataset
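The preview ends at the imports; a rough sketch of how these pieces typically fit together with TRL's GRPOTrainer (the reward function, dataset, and model id below are illustrative placeholders, not the gist's actual configuration):

# Sketch: a toy reward function wired into TRL's GRPOTrainer.
# Dataset, model id, and reward are placeholders for illustration only.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters (GRPO maximizes the scores).
    return [-abs(20 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset with a "prompt" column

training_args = GRPOConfig(output_dir="grpo-demo", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # placeholder model id
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()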
rogarcia / ollama_fast_speech_text_speech.py
Created February 16, 2024 03:10 — forked from lucataco/ollama_fast_speech_text_speech.py
speech to text to speech using Ollama
""" To use: install Ollama, clone OpenVoice, run this script in the OpenVoice directory
brew install portaudio
brew install git-lfs
git lfs install
git clone https://github.com/myshell-ai/OpenVoice
cd OpenVoice
git clone https://huggingface.co/myshell-ai/OpenVoice
cp -r OpenVoice/* .
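The preview cuts off inside the setup docstring; as a small sketch of just the text-generation step in the speech-to-text-to-speech loop (using the ollama Python package, which may differ from how the script itself calls the API; the model name is a placeholder and the whisper/OpenVoice audio steps are omitted):

# Sketch: the "text" step of the loop, via the ollama Python client.
# Model name is a placeholder; audio capture and OpenVoice synthesis are not shown.
import ollama

reply = ollama.chat(
    model="mistral",  # placeholder; any locally pulled Ollama model works
    messages=[{"role": "user", "content": "Say something short I can synthesize back to speech."}],
)
print(reply["message"]["content"])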