@simonw
Last active August 13, 2023 08:22
"Is there a tool that uses the GitHub API to generate a doc with all release notes from a repo?" https://twitter.com/mholt6/status/1690177417393135616

0.6.1

2023-07-24T15:53:48Z

  • LLM can now be installed directly from Homebrew core: brew install llm. #124
  • Python API documentation now covers System prompts.
  • Fixed incorrect example in the Prompt templates documentation. Thanks, Jorge Cabello. #125

0.6

2023-07-18T21:36:37Z

  • Models hosted on Replicate can now be accessed using the llm-replicate plugin, including the new Llama 2 model from Meta AI. More details here: Accessing Llama 2 from the command-line with the llm-replicate plugin.
  • Model providers that expose an API compatible with the OpenAI API format, including self-hosted model servers such as LocalAI, can now be accessed using additional configuration for the default OpenAI plugin. #106
  • OpenAI models that are not yet supported by LLM can also be configured using the new extra-openai-models.yaml configuration file. #107
  • The llm logs command now accepts a -m model_id option to filter logs to a specific model. Aliases can be used here in addition to model IDs. #108
  • Logs now have a SQLite full-text search index against their prompts and responses, and the llm logs -q SEARCH option can be used to return logs that match a search term. #109
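The full-text search behind llm logs -q is a SQLite FTS index over prompts and responses. The underlying mechanism can be sketched with a throwaway database — the table name and rows below are illustrative, not LLM's real schema:

```shell
rm -f /tmp/fts_demo.db
# Create an FTS5 virtual table, insert a row, then search it:
sqlite3 /tmp/fts_demo.db "
CREATE VIRTUAL TABLE notes USING fts5(prompt, response);
INSERT INTO notes VALUES ('pelican facts', 'Pelicans are large water birds.');
SELECT prompt FROM notes WHERE notes MATCH 'pelican';"
```

The MATCH query returns any row whose prompt or response contains the search term, which is what llm logs -q SEARCH does against the logs database.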

0.5

2023-07-12T14:22:17Z

LLM now supports additional language models, thanks to a new plugins mechanism for installing additional models.

Plugins are available for 19 models in addition to the default OpenAI ones:

  • llm-gpt4all adds support for 17 models that can download and run on your own device, including Vicuna, Falcon and WizardLM.
  • llm-mpt30b adds support for the MPT-30B model, a 19GB download.
  • llm-palm adds support for Google's PaLM 2 via the Google API.

A comprehensive tutorial, Writing a plugin to support a new model, describes in detail how to add new models by building plugins.

New features

  • Python API documentation for using LLM models, including models from plugins, directly from Python. #75
  • Messages are now logged to the database by default - no need to run the llm init-db command any more, which has been removed. Instead, you can toggle this behavior off using llm logs off or turn it on again using llm logs on. The llm logs status command shows the current status of the log database. If logging is turned off, passing --log to the llm prompt command will cause that prompt to be logged anyway. #98
  • New database schema for logged messages, with conversations and responses tables. If you have previously used the old logs table it will continue to exist but will no longer be written to. #91
  • New -o/--option name value syntax for setting options for models, such as temperature. Available options differ for different models. #63
  • llm models list --options command for viewing all available model options. #82
  • llm "prompt" --save template option for saving a prompt directly to a template. #55
  • Prompt templates can now specify default values for parameters. Thanks, Chris Mungall. #57
  • llm openai models command to list all available OpenAI models from their API. #70
  • llm models default MODEL_ID to set a different model as the default to be used when llm is run without the -m/--model option. #31

Smaller improvements

  • llm -s is now a shortcut for llm --system. #69
  • llm -m 4-32k alias for gpt-4-32k.
  • llm install -e directory command for installing a plugin from a local directory.
  • The LLM_USER_PATH environment variable now controls the location of the directory in which LLM stores its data. This replaces the old LLM_KEYS_PATH and LLM_LOG_PATH and LLM_TEMPLATES_PATH variables. #76
  • Documentation covering Utility functions for plugins.
  • Documentation site now uses Plausible for analytics. #79
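The LLM_USER_PATH change above can be exercised directly; a minimal sketch, with an arbitrary directory name:

```shell
# Point LLM's keys, logs and templates at a custom data directory:
export LLM_USER_PATH="$HOME/llm-demo-data"
mkdir -p "$LLM_USER_PATH"
echo "$LLM_USER_PATH"
```

Any llm command run in this shell will now read and write its data under that directory instead of the default location.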

0.4.1

2023-06-17T21:39:20Z

  • LLM can now be installed using Homebrew: brew install simonw/llm/llm. #50
  • llm is now styled LLM in the documentation. #45
  • Examples in documentation now include a copy button. #43
  • llm templates command no longer has its display disrupted by newlines. #42
  • llm templates command now includes system prompt, if set. #44

0.4

2023-06-17T08:44:06Z

This release includes some backwards-incompatible changes:

  • The -4 option for GPT-4 is now -m 4.
  • The --code option has been removed.
  • The -s option has been removed as streaming is now the default. Use --no-stream to opt out of streaming.

Prompt templates

Prompt templates are a new feature: prompts can be saved as templates and re-used with different variables.

Templates can be created using the llm templates edit command:

llm templates edit summarize

Templates are YAML - the following template defines summarization using a system prompt:

system: Summarize this text

The template can then be executed like this:

cat myfile.txt | llm -t summarize

Templates can include system prompts and regular prompts, and can indicate the model they should use. They can reference variables such as $input for content piped to the tool, or other variables passed using the new -p/--param option.

This example adds a voice parameter:

system: Summarize this text in the voice of $voice

Then to run it (via strip-tags to remove HTML tags from the input):

curl -s 'https://til.simonwillison.net/macos/imovie-slides-and-audio' | \
  strip-tags -m | llm -t summarize -p voice GlaDOS

Example output:

My previous test subject seemed to have learned something new about iMovie. They exported keynote slides as individual images [...] Quite impressive for a human.

The Prompt templates documentation provides more detailed examples.
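Putting the pieces above together, a template file can combine a model with a parameterized system prompt. A minimal sketch, written to a scratch file here rather than via llm templates edit — the model field name is an assumption based on the description above:

```shell
# A hypothetical template with a model and a $voice parameter:
cat > /tmp/summarize-demo.yaml <<'EOF'
model: gpt-3.5-turbo
system: Summarize this text in the voice of $voice
EOF
cat /tmp/summarize-demo.yaml
```

Run with a real installation, the $voice placeholder would be filled in by -p voice VALUE at invocation time.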

Continue previous chat

You can now use llm to continue a previous conversation with the OpenAI chat models (gpt-3.5-turbo and gpt-4). This will include your previous prompts and responses in the prompt sent to the API, allowing the model to continue within the same context.

Use the new -c/--continue option to continue from the previous message thread:

llm "Pretend to be a witty gerbil, say hi briefly"

Greetings, dear human! I am a clever gerbil, ready to entertain you with my quick wit and endless energy.

llm "What do you think of snacks?" -c

Oh, how I adore snacks, dear human! Crunchy carrot sticks, sweet apple slices, and chewy yogurt drops are some of my favorite treats. I could nibble on them all day long!

The -c option will continue from the most recent logged message.

To continue a different chat, pass an integer ID to the --chat option. This should be the ID of a previously logged message. You can find these IDs using the llm logs command.

Thanks Amjith Ramanujam for contributing to this feature. #6

New mechanism for storing API keys

API keys for language models such as those by OpenAI can now be saved using the new llm keys family of commands.

To set the default key to be used for the OpenAI APIs, run this:

llm keys set openai

Then paste in your API key.

Keys can also be passed using the new --key command line option - this can be a full key or the alias of a key that has been previously stored.

See link-to-docs for more. #13

New location for the logs.db database

The logs.db database that stores a history of executed prompts no longer lives at ~/.llm/log.db - it can now be found in a location that better fits the host operating system, which can be seen using:

llm logs path

On macOS this is ~/Library/Application Support/io.datasette.llm/logs.db.

To open that database using Datasette, run this:

datasette "$(llm logs path)"

You can upgrade your existing installation by copying your database to the new location like this:

cp ~/.llm/log.db "$(llm logs path)"
rm -rf ~/.llm # To tidy up the now obsolete directory

The database schema has changed, and will be updated automatically the first time you run the command.

That schema is included in the documentation. #35

Other changes

  • New llm logs --truncate option (shortcut -t) which truncates the displayed prompts to make the log output easier to read. #16
  • Documentation now spans multiple pages and lives at https://llm.datasette.io/ #21
  • Default llm chatgpt command has been renamed to llm prompt. #17
  • Removed --code option in favour of new prompt templates mechanism. #24
  • Responses are now streamed by default, if the model supports streaming. The -s/--stream option has been removed. A new --no-stream option can be used to opt-out of streaming. #25
  • The -4/--gpt4 option has been removed in favour of -m 4 or -m gpt4, using a new mechanism that allows models to have additional short names.
  • The new gpt-3.5-turbo-16k model with a 16,000 token context length can now also be accessed using -m chatgpt-16k or -m 3.5-16k. Thanks, Benjamin Kirkbride. #37
  • Improved display of error messages from OpenAI. #15

0.3

2023-05-17T21:10:18Z

  • llm logs command for browsing logs of previously executed completions. #3
  • llm "Python code to output factorial 10" --code option which sets a system prompt designed to encourage code to be output without any additional explanatory text. #5
  • Tool can now accept a prompt piped directly to standard input. #11

0.2

2023-04-01T22:29:04Z

  • If a SQLite database exists in ~/.llm/log.db all prompts and responses are logged to that file. The llm init-db command can be used to create this file. #2

0.1

2023-04-01T22:00:19Z

  • Initial prototype release. #1

simonw commented Aug 12, 2023

curl -s 'https://api.github.com/repos/simonw/llm/releases' | \
  jq -r '.[] | "## " + .name + "\n\n*" + .created_at + "*\n\n" + .body + "\n"'
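The jq filter can be sanity-checked against a small inline sample; the release object below is made up:

```shell
# Feed one fake release through the same filter used above:
echo '[{"name":"0.1","created_at":"2023-04-01T22:00:19Z","body":"Initial prototype release."}]' | \
  jq -r '.[] | "## " + .name + "\n\n*" + .created_at + "*\n\n" + .body + "\n"'
```

Each release becomes a Markdown section: an `## name` heading, the creation timestamp in italics, then the release body.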


simonw commented Aug 12, 2023

If you have more than 30 releases you'll need to combine multiple pages - my paginate-json tool can help with that:

pipx install paginate-json

Then:

paginate-json 'https://api.github.com/repos/simonw/sqlite-utils/releases' | \
  jq -r '.[] | "## " + .name + "\n\n*" + .created_at + "*\n\n" + .body + "\n"'

Output here: https://gist.github.com/simonw/f5565b0b67cdd3591e00db67c702f5c5
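As I understand it, paginate-json fetches every page and emits the combined results as a single JSON stream. The merging step is roughly what jq's slurp mode does when given multiple arrays — the page data below is invented:

```shell
# Two fake "pages" of release data, merged into one list before formatting:
page1='[{"name":"0.2"}]'
page2='[{"name":"0.1"}]'
printf '%s\n%s\n' "$page1" "$page2" | jq -r -s 'add | .[] | "## " + .name'
```

With -s, jq reads both arrays, `add` concatenates them, and the rest of the filter runs over the combined list — so the same formatting pipeline works across page boundaries.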
