
@obeone
Last active May 14, 2024 01:57
Ollama ZSH Completion

ZSH Completion for Ollama (ollama command)

This ZSH completion script enhances your command line interface by providing auto-completion for the ollama command, part of the Ollama suite. It supports all subcommands and their options, significantly improving efficiency and reducing the need for memorizing syntax.

Installation

  1. Download the Completion Script
    You can download the _ollama script directly using the following commands:

    mkdir -p $HOME/.zsh_completions
    wget -O $HOME/.zsh_completions/_ollama https://gist.githubusercontent.com/obeone/9313811fd61a7cbb843e0001a4434c58/raw/_ollama.zsh
  2. Add to ZSH Fpath
    Ensure that the script is in one of the directories listed in your fpath. You can view your current fpath with:

    echo $fpath

    To add a new directory to your fpath, include the following line in your .zshrc:

    fpath=($HOME/.zsh_completions $fpath)
  3. Source the Completion Script

    To enable the completion script, restart your ZSH session, or source your .zshrc if you haven't done so since updating it: source ~/.zshrc

    If you use Oh My Zsh, you can reload it with omz reload.

    ZSH should automatically source all completion scripts in your fpath when starting a new shell session. Ensure the script is named correctly as _ollama.
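Taken together, steps 1–3 amount to a small fragment in your .zshrc (assuming the $HOME/.zsh_completions directory from step 1; if you use a framework such as Oh My Zsh, compinit is likely already called for you):

```shell
# ~/.zshrc — register the custom completion directory, then initialize
# zsh's completion system (skip compinit if your framework already runs it)
fpath=($HOME/.zsh_completions $fpath)
autoload -Uz compinit
compinit
```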

Usage

Simply type ollama followed by a space and press tab to see the available subcommands; after a subcommand, type - and press tab to list its options. For example:

ollama [tab] # Lists all subcommands
ollama serve --[tab] # Will propose all available options for the `serve` command

How It Was Created

The completion script for ollama was primarily generated using a custom GPT I created named ZSH Expert (ChatGPT Plus is required to use it).

This model specializes in creating ZSH completion scripts by analyzing the --help outputs of various commands. Here's the process:

  1. Generating Completion Logic:
    I provided the --help outputs of the ollama command to ZSH Expert, which then generated the necessary ZSH completion logic, including handling of subcommands and options.

  2. Validation and Iteration:
    The script was tested iteratively to catch issues with command completion, especially around new or complex command options. Feel free to report any issues to help improve the script further.

You can view the entire creation process and the discussions that led to the final script in this chat history. It was the first completion script I made while developing ZSH Expert, so the discussion was naturally long and tedious.

Contributors

  • ZSH Expert (ChatGPT Custom GPT, did almost all the work)
  • obeone (Guiding and testing)

License

Distributed under the MIT License.

#compdef ollama
# Purpose:
# This script file `_ollama` should be placed in your fpath to provide zsh completions functionality for ollama commands.
# It utilizes zsh's native completion system by defining specific completion behaviors tailored to ollama commands.
# Installation:
# 1. Check your current fpath by executing: `echo $fpath` in your zsh shell.
# 2. To introduce a new directory to fpath, edit your .zshrc file:
# Example: `fpath=($HOME/.zsh_completions $fpath)`
# 3. Store this script file in the directory you have added to your fpath.
# 4. For a system-wide installation on Linux:
# Download and deploy this script with the following command:
# sudo wget -O /usr/share/zsh/site-functions/_ollama https://gist.githubusercontent.com/obeone/9313811fd61a7cbb843e0001a4434c58/raw/_ollama.zsh
# Contributions:
# Principal contributions by:
# - ChatGPT [ZSH Expert](https://chatgpt.com/g/g-XczdbjXSW-zsh-expert) as the primary creator.
# - Guidance and revisions by [obeone](https://github.com/obeone).
# Note:
# - This configuration file presupposes the utilization of Zsh as your primary shell environment.
# - It is crucial to restart your zsh session subsequent to alterations made to your fpath to ensure the updates are effectively recognized.
# Function to fetch and return model names from 'ollama list'
_fetch_ollama_models() {
  local -a models
  local output="$(ollama list 2>/dev/null | sed 's/:/\\:/g')" # Escape colons for zsh
  if [[ -z "$output" ]]; then
    _message "no models available or 'ollama list' failed"
    return 1
  fi
  models=("${(@f)$(echo "$output" | awk 'NR>1 {print $1}')}")
  if [[ ${#models} -eq 0 ]]; then
    _message "no models found"
    return 1
  fi
  _describe 'model names' models
}
# Main completion function
_ollama() {
  local context state line
  local -a commands

  _arguments -C \
    '1: :->command' \
    '*:: :->args'

  case $state in
    command)
      commands=(
        'serve:Start ollama'
        'create:Create a model from a Modelfile'
        'show:Show information for a model'
        'run:Run a model'
        'pull:Pull a model from a registry'
        'push:Push a model to a registry'
        'list:List models'
        'cp:Copy a model'
        'rm:Remove a model'
        'help:Help about any command'
      )
      _describe -t commands 'ollama command' commands
      ;;
    args)
      case $words[1] in
        serve)
          _arguments \
            '--host[Specify the host and port]:host and port:' \
            '--origins[Set allowed origins]:origins:' \
            '--models[Path to the models directory]:path:_directories' \
            '--keep-alive[Duration to keep models in memory]:duration:'
          ;;
        create)
          _arguments \
            '-f+[Specify the file name]:file:_files'
          ;;
        show)
          _arguments \
            '--license[Show license of a model]' \
            '--modelfile[Show Modelfile of a model]' \
            '--parameters[Show parameters of a model]' \
            '--system[Show system message of a model]' \
            '--template[Show template of a model]' \
            '*::model:->model'
          if [[ $state == model ]]; then
            _fetch_ollama_models
          fi
          ;;
        run)
          _arguments \
            '--format[Specify the response format]:format:' \
            '--insecure[Use an insecure registry]' \
            '--nowordwrap[Disable word wrap]' \
            '--verbose[Show verbose output]' \
            '*::model and prompt:->model_and_prompt'
          if [[ $state == model_and_prompt ]]; then
            _fetch_ollama_models
            _message "enter prompt"
          fi
          ;;
        pull|push)
          _arguments \
            '--insecure[Use an insecure registry]' \
            '*::model:->model'
          if [[ $state == model ]]; then
            _fetch_ollama_models
          fi
          ;;
        list)
          _message "no additional arguments for list"
          ;;
        cp)
          _arguments \
            '1:source model:_fetch_ollama_models' \
            '2:target model:_fetch_ollama_models'
          ;;
        rm)
          _arguments \
            '*::models:->models'
          if [[ $state == models ]]; then
            _fetch_ollama_models
          fi
          ;;
        help)
          _message "no additional arguments for help"
          ;;
      esac
      ;;
  esac
}
_ollama
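The parsing step inside _fetch_ollama_models can be exercised on its own, outside of zsh's completion system. The snippet below is a sketch using fabricated `ollama list` output (the model names and columns are examples, not real output): colons are escaped because `_describe` treats `:` as a name/description separator, and awk keeps the first column of every row after the header.

```shell
# Fabricated 'ollama list' output: a header row, then one model per line.
output="NAME            ID            SIZE    MODIFIED
llama3:latest   365c0bd3c000  4.7 GB  2 days ago
phi3:mini       4f2222927938  2.2 GB  5 days ago"

# Escape colons for zsh's _describe, then print the NAME column,
# skipping the header (NR>1).
printf '%s\n' "$output" | sed 's/:/\\:/g' | awk 'NR>1 {print $1}'
```

This prints one escaped model name per line, which is exactly the array `_fetch_ollama_models` hands to `_describe`.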
@ncoquelet

Nice, it works like a charm!
Any plans to publish it as an official plugin?
I can help if needed
Regards

@obeone

obeone commented May 14, 2024

Hi @ncoquelet !

I would be proud if it serves other people!

As for the backstory, this was the first completion test with ZSH Expert, the custom GPT I created, so the process was very long and tedious 🤣
If you're interested, here is the chat log (sorry, it's in French).

(This of course allowed me to improve the prompt, documentation etc of the GPT and therefore to be able to create completions much more quickly.)

You can find other examples with sgpt and lms.

I'll add a README.md like the others, and maybe gather them all in one repository if I keep going 😄
