This document outlines best practices for working with Claude Code (claude.ai/code) in any programming project.
1. "I want to..." → State the goal
2. "So I think..." → Explain approach
The current feedback mechanism relies primarily on binary feedback (thumbs up/thumbs down). While this provides a simple way to gauge user satisfaction, it lacks the granularity needed to understand the nuances of user experience and model performance. To address this, we propose a more comprehensive feedback mechanism that incorporates multiple dimensions of evaluation.
Instead of a binary score, we can introduce a Likert scale (1-5 or 1-7) for different dimensions of the interaction. These dimensions could include:
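As a minimal sketch of what a multi-dimensional Likert record could look like, assuming a 1-5 scale and illustrative dimension names ("accuracy", "helpfulness") that are not taken from the proposal itself:

```python
from dataclasses import dataclass, field

# Assumed scale bounds; the proposal leaves open 1-5 vs. 1-7.
LIKERT_MIN, LIKERT_MAX = 1, 5

@dataclass
class InteractionFeedback:
    """Holds one Likert score per feedback dimension for a single interaction."""
    scores: dict[str, int] = field(default_factory=dict)

    def rate(self, dimension: str, score: int) -> None:
        # Reject scores outside the configured Likert range.
        if not LIKERT_MIN <= score <= LIKERT_MAX:
            raise ValueError(
                f"score must be between {LIKERT_MIN} and {LIKERT_MAX}, got {score}"
            )
        self.scores[dimension] = score

# Dimension names here are hypothetical examples.
fb = InteractionFeedback()
fb.rate("accuracy", 4)
fb.rate("helpfulness", 5)
print(fb.scores)  # {'accuracy': 4, 'helpfulness': 5}
```

Keeping each dimension as a separate keyed score (rather than one aggregate number) preserves the granularity the binary mechanism lacks, while still allowing an average to be computed when a single satisfaction figure is needed.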