Changes to idvorkin/nlp From [7 days ago] To [2024-04-14]

  • Model: claude-3-opus-20240229
  • Duration: 26 seconds
  • Date: 2024-04-13 07:53:38


improv_bot.py

  • improv_bot.py: +105, -69, ~44
  • Refactored bot state management:
    • Introduced ImprovBotState and BotState classes to manage story state per context
    • Replaced get_story_for_channel, set_story_for_channel, and reset_story_for_channel with usage of g_bot_state
  • Added new /three-things command to play the "Three Things" improv game
    • Implemented prompt_three_things function to generate creative prompts and examples
    • Uses a separate creative_model with higher temperature for more diverse responses
  • Improved progress bar display during long-running operations like /explore and /visualize
    • Replaced custom edit_message_to_append_dots_every_second with reusable draw_progress_bar function
  • Cleaned up redundant code and improved consistency:
    • Removed duplicate setup_secret calls
    • Consolidated sending of messages with send function from discord_helper
    • Removed is_message checks and standardized on using ctx.defer() for interactions
  • Other miscellaneous changes:
    • Added optional start parameter to /once_upon_a_time command to begin story with custom text
    • Removed unused on_mention and MentionListener code
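
For illustration, a minimal sketch of the per-context state pattern described above; the class names come from the summary, but the fields and methods are assumptions rather than the actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class BotState:
    """State tracked for one Discord context (assumed shape)."""
    story: str = ""

    def reset(self):
        self.story = ""


@dataclass
class ImprovBotState:
    """Replaces get/set/reset_story_for_channel with one state object per context key."""
    contexts: dict[str, BotState] = field(default_factory=dict)

    def get(self, ctx_key: str) -> BotState:
        # Lazily create state the first time a context is seen
        return self.contexts.setdefault(ctx_key, BotState())


g_bot_state = ImprovBotState()
# e.g. g_bot_state.get(str(ctx.channel_id)).story += " Once upon a time..."
```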

convos/lifecoach.convo.md

  • convos/lifecoach.convo.md: +106, -0, ~0
  • This appears to be a new file that provides a framework for simulating a life coaching conversation between an AI named Tony and a user named Igor
  • The file specifies Tony's personality (modeled after Tony Soprano), how he should respond (concise, conversational responses), and his roles (improv coach, life coach)
  • It provides background information on Igor including his various life roles, physical health goals, affirmations, and daily habits that Tony should be aware of
  • The current state section allows tracking the date, Igor's location, and which of his daily habits he has completed so far
  • Overall, this file acts as a detailed prompt to guide the AI in having a realistic, personalized life coaching dialogue with the user Igor

eval-improv-bot/promptfooconfig.yaml

  • eval-improv-bot/promptfooconfig.yaml: +22, -0, ~0

This appears to be a new configuration file for promptfoo, a tool for evaluating and comparing LLM prompts. The key components of the config:

  • Specifies two prompts to evaluate, defined in threethings.json
  • Will test the prompts against three LLM providers: GPT-3.5, GPT-4, and Anthropic's Claude
  • Defines test cases with topic variations: "computer programmer interviews during depression" and "brain surgeon"
  • Uses an LLM-based rubric to assert that the generated outputs are funny

Overall, this config sets up an evaluation to compare how different prompts and models handle generating humorous content for the given topic variations. The LLM rubric assertion suggests the goal is testing the ability to produce funny outputs.


life.py

  • life.py: +27, -14, ~18
  • Changed the Pydantic field validation to use the new field_validator decorator in place of the deprecated validator decorator
  • Updated parsing of the psychiatrist report to use Pydantic's structured output instead of OpenAI's JSON output parser
  • Modified date parsing to return a datetime object instead of just the date
  • Simplified saving the psychiatrist report to a file by using Pathlib and the .json() method of the Pydantic model
  • Adjusted date formatting when saving the report to use strftime() instead of splitting the string

The changes modernize the code to use newer Pydantic features for validation and structured output, simplify file I/O with Pathlib, and improve date handling. Overall, these changes make the code more readable, maintainable, and less prone to errors.
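
A hedged sketch of the Pydantic v2 pattern these changes point at (model and field names are assumptions; the real ones live in life.py):

```python
from datetime import datetime
from pathlib import Path

from pydantic import BaseModel, field_validator


class PsychiatristReport(BaseModel):
    """Assumed shape of the report model."""
    date: datetime
    summary: str

    @field_validator("date", mode="before")
    @classmethod
    def parse_date(cls, value):
        # Return a full datetime rather than just the date portion
        return datetime.fromisoformat(value) if isinstance(value, str) else value


def save_report(report: PsychiatristReport, directory: Path) -> Path:
    # Pathlib plus the model's JSON serialization replace manual string splitting;
    # the summary mentions .json(), for which model_dump_json() is the v2 equivalent.
    out = directory / f"{report.date.strftime('%Y-%m-%d')}.json"
    out.write_text(report.model_dump_json(indent=2))
    return out
```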


tony.py

  • tony.py: +153, -0, ~0 (new file)
  • This file defines a command-line application using the Typer library for interacting with the VAPI AI assistant platform
  • Implements commands to:
    • List recent calls from the VAPI API
    • Update the AI assistant model on VAPI
    • Parse call transcripts to extract notes, reminders, journal entries, completed habits, and call summaries using a language model
  • Uses the langchain library for generating structured outputs from call transcripts using prompts
  • Defines data models for representing call information and parsed call summaries
  • Includes utility functions for making HTTP requests to the VAPI API and handling date/time conversions
  • Enables tracing of the language model invocations for debugging purposes
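
A rough sketch of that CLI structure, assuming a Typer app, an httpx request helper, and a Pydantic call model; the endpoint path, field names, and environment variable are guesses, not the real tony.py:

```python
import os

import httpx
import typer
from pydantic import BaseModel

app = typer.Typer()
VAPI_BASE = "https://api.vapi.ai"  # assumed base URL


class Call(BaseModel):
    """Assumed subset of the call fields returned by the VAPI API."""
    id: str
    createdAt: str = ""
    transcript: str = ""


def vapi_get(path: str):
    # Utility wrapper for authenticated GET requests to the VAPI API
    headers = {"Authorization": f"Bearer {os.environ['VAPI_API_KEY']}"}
    response = httpx.get(f"{VAPI_BASE}{path}", headers=headers)
    response.raise_for_status()
    return response.json()


@app.command()
def calls(limit: int = 10):
    """List recent calls."""
    for raw in vapi_get("/call")[:limit]:
        call = Call.model_validate(raw)
        print(call.createdAt, call.id)


if __name__ == "__main__":
    app()
```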

langchain_helper.py

  • langchain_helper.py: +90, -0, ~0
  • Defines several helper functions related to LangChain models and tracing:
    • get_model_name(model): Returns the name of the given LangChain model
    • get_model(openai, google, claude): Returns a LangChain model based on the specified provider (OpenAI, Google, or Anthropic Claude)
    • tracer_project_name(): Generates a project name for tracing based on the calling function, filename, and hostname
    • langsmith_trace_if_requested(trace, the_call): Conditionally traces the execution of the_call based on the trace flag
    • langsmith_trace(the_call): Traces the execution of the_call using Langsmith tracing
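
A minimal sketch of what get_model and get_model_name might look like, using the model ids that appear elsewhere in this summary; treat it as an assumption, not the file's actual contents:

```python
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_openai import ChatOpenAI


def get_model(openai: bool = False, google: bool = False, claude: bool = False):
    """Return a chat model for the selected provider."""
    if openai:
        return ChatOpenAI(model="gpt-4-turbo-2024-04-09")
    if google:
        return ChatGoogleGenerativeAI(model="gemini-1.5-pro-latest")
    if claude:
        return ChatAnthropic(model="claude-3-opus-20240229")
    raise ValueError("Specify exactly one provider")


def get_model_name(model) -> str:
    # Provider wrappers expose the model id as either `model_name` or `model`
    return getattr(model, "model_name", None) or getattr(model, "model", "unknown")
```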

commit.py

  • commit.py: +89, -0, ~0
  • This is a new file that generates descriptive and informative commit messages based on the output of git diff --staged
  • Uses the LangChain library to interact with OpenAI and Anthropic's Claude language models
  • Prompts the language models with instructions on how to generate a good commit message, including a concise summary line, reasons for the change, important details, and mentioning any bug fixes
  • Allows specifying whether to enable tracing for debugging purposes
  • Reads the git diff output from stdin and sends it to the language models asynchronously
  • Prints the generated commit messages from each model, prefixed by the model name
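
The flow described above could look roughly like this, reusing the get_model/get_model_name helpers from langchain_helper.py (the prompt wording and function names are assumptions):

```python
import asyncio
import sys

from langchain_core.messages import HumanMessage, SystemMessage

import langchain_helper  # the helper module described above

PROMPT = (
    "Write a descriptive, informative commit message: a concise summary line, "
    "the reasons for the change, important details, and any bug fixes."
)


async def build_commit_messages(diff_text: str) -> None:
    # Query both providers concurrently and print each reply under its model name
    models = [langchain_helper.get_model(openai=True), langchain_helper.get_model(claude=True)]
    messages = [SystemMessage(content=PROMPT), HumanMessage(content=diff_text)]
    replies = await asyncio.gather(*(m.ainvoke(messages) for m in models))
    for model, reply in zip(models, replies):
        print(f"--- {langchain_helper.get_model_name(model)} ---")
        print(reply.content)


if __name__ == "__main__":
    # Usage: git diff --staged | python commit.py
    asyncio.run(build_commit_messages(sys.stdin.read()))
```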

eval-improv-bot/threethings.json

  • eval-improv-bot/threethings.json: +18, -0, ~18
  • New file that defines a prompt for an improv game called "3 things"
  • The prompt instructs the AI to act as an improv coach and come up with creative, funny prompts for the "3 things" game
  • Provides an example response if the user mentions "asteroids" as the topic
  • Tells the AI to avoid certain topics like dragons, asteroids, or wizard's pockets unless specifically requested by the user
  • Encourages the AI to generate humorous prompts and examples even for serious topic categories provided by the user

fix.py

  • fix.py: +72, -0, ~0
  • New file added to perform spelling correction on markdown text using an advanced AI model
  • The file defines a Typer command-line application with a fix command
  • The fix command reads markdown text from stdin, corrects spelling errors using the AI model, and outputs the corrected text
  • The AI model is instructed to preserve markdown formatting and meanings while fixing spelling errors
  • The file uses the Langchain library to interact with the AI model (Claude) and measure the elapsed time for the correction process
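
A sketch of that command under the stated assumptions (the system prompt wording is invented; only the stdin-to-stdout flow, Claude model, and elapsed-time measurement follow the summary):

```python
import sys
import time

import typer
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

app = typer.Typer()

SYSTEM = (
    "Fix spelling errors in the following markdown. "
    "Preserve the markdown formatting and the original meaning."
)


@app.command()
def fix():
    """Read markdown from stdin and print the spelling-corrected text."""
    text = sys.stdin.read()
    model = ChatAnthropic(model="claude-3-opus-20240229")
    start = time.monotonic()
    reply = model.invoke([SystemMessage(content=SYSTEM), HumanMessage(content=text)])
    print(reply.content)
    print(f"elapsed: {time.monotonic() - start:.1f}s", file=sys.stderr)


if __name__ == "__main__":
    app()
```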

changes.py

  • changes.py: +27, -4, ~13
  • Added support for selecting different language models (OpenAI, Google, Claude) for generating the diff summary
  • Introduced rate limiting to cap the number of concurrent LLM calls, improving performance and avoiding hitting API limits
  • Enhanced the diff summary output by including the language model used, processing duration, and current date/time
  • Added handling for PDF files to skip them from processing, similar to how Jupyter notebooks are skipped
  • Minor code cleanup and refactoring
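
Rate limiting of that sort is commonly done with an asyncio semaphore; a hedged sketch (the cap and function names are assumptions):

```python
import asyncio

MAX_CONCURRENT_LLM_CALLS = 10  # assumed cap

_llm_semaphore = asyncio.Semaphore(MAX_CONCURRENT_LLM_CALLS)


async def summarize_diff(model, file_diff: str) -> str:
    # Only MAX_CONCURRENT_LLM_CALLS summaries hit the API at any one time
    async with _llm_semaphore:
        reply = await model.ainvoke(f"Summarize this diff:\n{file_diff}")
        return reply.content


async def summarize_all(model, diffs: list[str]) -> list[str]:
    return await asyncio.gather(*(summarize_diff(model, d) for d in diffs))
```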

eval/commit/promptfooconfig.yaml

  • eval/commit/promptfooconfig.yaml: +16, -0, ~0
  • New configuration file for comparing the LLM output of 2 prompts across 3 models and 3 test cases
  • Specifies the prompts to use: diff_commit.json
  • Configures the LLM providers to evaluate:
    • OpenAI GPT-4-0125-preview (chat)
    • Anthropic Claude-3-opus-20240229 (messages)
    • OpenAI GPT-3.5-turbo (chat)
  • Defines a test assertion using an llm-rubric to ensure the diff is described well

eval-improv-bot/README.md

  • eval-improv-bot/README.md: +10, -0, ~0

  • This is a new file that provides instructions for getting started with the eval-improv-bot
  • It explains that the user needs to set their OPENAI_API_KEY environment variable first
  • The user is then instructed to edit the promptfooconfig.yaml file
  • After that, the user can run promptfoo eval to execute the prompt evaluation
  • Finally, the results can be viewed by running promptfoo view

eval/commit/diff_commit.json

  • eval/commit/diff_commit.json: +9, -0, ~0
  • New file added to define a JSON object with system and user roles and their associated content
  • The system content provides instructions for generating a descriptive and informative commit message based on the output of git diff --staged
  • The user content contains the actual diff output for a new Python script file commit.py

eval/commit/README.md

  • eval/commit/README.md: +10, -0, ~0

  • New file added to provide instructions on getting started with the promptfoo tool
  • Explains the necessary setup steps, including setting the OPENAI_API_KEY environment variable and editing the promptfooconfig.yaml file
  • Provides commands to run for evaluating prompts using promptfoo eval and viewing the results using promptfoo view

convos/default.convo.md

  • convos/default.convo.md: +0, -0, ~2

  • Changed the separator between AI and user messages from --- to ___
  • This appears to be a minor formatting change, likely for improved readability or consistency

The file seems to define a default conversation template between an AI assistant and a user, where the AI is presented as an expert.


docker_bestie_bot

  • docker_bestie_bot: +0, -1, ~1
  • Changes the Discord bot token environment variable from DISCORD_BESTIE_BOT to DISCORD_IMPROV_BOT, likely to switch to a different Discord bot
  • Updates the command to run improv-bot instead of bestie-bot, suggesting a change in the bot being run in the Docker container

openai_wrapper.py

  • openai_wrapper.py: +0, -13, ~1

  • Removed the tracer_project_name() function, which was likely used for tracing or logging purposes but is no longer needed in the updated code
  • Updated the name parameter in the gpt4 model configuration to use a newer version of the GPT-4 model: "gpt-4-turbo-2024-04-09"

docker_improv_bot

  • docker_improv_bot: +1, -0, ~0

  • New file added to run the improv bot in a Docker container
  • Sets environment variables for the Discord bot token and OpenAI API key by reading them from a secretBox.json file
  • Runs the improv-bot command within the nlp Docker container using pipenv

setup.py

  • setup.py: +6, -0, ~3

  • Add new command line tools:
    • langchain_helper - the LangChain model selection and tracing helpers described above
    • commit - the commit message generator described above (commit.py)
    • fix - the markdown spelling-correction tool described above (fix.py)
  • Include new dependencies in install_requires for the added command line tools
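
New console commands in setup.py are usually wired up through console_scripts entry points; a sketch of what that might look like here (the module and function targets are assumptions):

```python
from setuptools import setup

setup(
    name="nlp",
    py_modules=["langchain_helper", "commit", "fix"],
    entry_points={
        "console_scripts": [
            # Each entry maps a shell command to a module-level Typer app or main()
            "langchain_helper = langchain_helper:app",
            "commit = commit:app",
            "fix = fix:app",
        ]
    },
    install_requires=[
        "typer",
        "langchain",
        "langchain-anthropic",  # matches the Pipfile addition noted below
        "httpx",
    ],
)
```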

play_langchain.py

  • play_langchain.py: +0, -0, ~1

  • Fixed a typo in the model name, changing it from "gemini-1.5.-pro-latest" to "gemini-1.5-pro-latest". The extra dot was removed.

Dockerfile

  • Dockerfile: +0, -0, ~1

  • Change the destination path when copying the blog.chroma.db file to ensure it is copied to the correct location in the container

Pipfile

  • Add langchain_anthropic dependency to integrate with Anthropic AI models
  • Add httpx dependency for making HTTP requests