Skip to content

Instantly share code, notes, and snippets.

@jjaimealeman
Last active August 26, 2023 20:05
Show Gist options
  • Save jjaimealeman/9c77453dbd090b43018bee5aa918fdb1 to your computer and use it in GitHub Desktop.
Save jjaimealeman/9c77453dbd090b43018bee5aa918fdb1 to your computer and use it in GitHub Desktop.
A shell script for accessing your ChatGPT via API call.

May 02 2023 (date started)

Aug 26 2023 (date modified)

With the explosion of ChatGPT use, I decided to search around for a shell script that I could call from my terminal. After testing out different ones, I settled on this one: https://github.com/0xacx/chatGPT-shell-cli

It's got some image generation capabilities that I do not use, but I left them there.

I have modified and edited the hell out of it for my own personal use.

The original script saved all previous prompts to a file ~/.chatgpt_history. But I found that after a few weeks of use, that file grew quite large. Mine was over 2,000 lines when I made this revision.

It now creates a new daily file at ~/.chatgpt_history with the format Y-%m-%d.md.

I found this method better as I can now quickly "Live Grep" search with neovim 👍

Filename example: ~/.chat_history/2023-05-02.md.

When prompting the first prompt of the day, this is the output:

image1

Afterwards, this is the output:

image2

Each prompt is now also saved along with the response. I found this format to be easier to read. image3

All prompt responses piped out to xclip -selection clipboard, so I can quickly and easily paste elsewhere.

Image generation is saved to ~/.chatgpt_history/images folder with date stamp, and suffixed with an incrementing number count.


Usage:

First get your own API key from https://platform.openai.com/account/api-keys

And add these 2 lines to .zshrc.

export OPENAI_API_KEY=sk-**********************************

export PATH="/home/UserName/path/to/script/chat:$PATH"

Give your script the proper permissions.

chmod +x chat

Requirements:

curl (for API calls)
jq (JSON parser)
xclip (clipboard)
glow (markdown viewer)
#!/bin/bash
GLOBIGNORE="*"
CHAT_INIT_PROMPT="You are ChatGPT, a Large Language Model trained by OpenAI. You will be answering questions from users. You answer as concisely as possible for each response (e.g. don’t be verbose). If you are generating a list, do not have too many items. Keep the number of items short. Before each user prompt you will be given the chat history in Q&A form. Output your answer directly, with no labels in front. Do not start your answers with A or Anwser. You were trained on data up until 2021. Today's date is $(date +%d/%m/%Y)"
SYSTEM_PROMPT="You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. Current date: $(date +%d/%m/%Y). Knowledge cutoff: 9/1/2021."
COMMAND_GENERATION_PROMPT="You are a Command Line Interface expert and your task is to provide functioning shell commands. Return a CLI command and nothing else - do not send it in a code block, quotes, or anything else, just the pure text CONTAINING ONLY THE COMMAND. If possible, return a one-line bash command or chain many commands together. Return ONLY the command ready to run in the terminal. The command should do the following:"
CHATGPT_CYAN_LABEL="\033[36mchatgpt \033[0m"
PROCESSING_LABEL="\n\033[90mProcessing... \033[0m\033[0K\r"
OVERWRITE_PROCESSING_LINE=" \033[0K\r"
if [[ -z "$OPENAI_API_KEY" ]]; then
echo "You need to set your OPENAI_API_KEY to use this script"
echo "You can set it temporarily by running this on your terminal: export OPENAI_API_KEY=YOUR_KEY_HERE"
exit 1
fi
usage() {
cat <<EOF
A simple, lightweight shell script to use OpenAI's Language Models and DALL-E from the terminal without installing Python or Node.js. Open Source and written in 100% Shell (Bash)
https://github.com/0xacx/chatGPT-shell-cli/
By default the script uses the "gpt-3.5-turbo" model. It will upgrade to "gpt-4" when the API is accessible to anyone.
Commands:
image: - To generate images, start a prompt with image: If you are using iTerm, you can view the image directly in the terminal. Otherwise the script will ask to open the image in your browser.
history - To view your chat history
models - To get a list of the models available at OpenAI API
model: - To view all the information on a specific model, start a prompt with model: and the model id as it appears in the list of models. For example: "model:text-babbage:001" will get you all the fields for text-babbage:001 model
command: - To get a command with the specified functionality and run it, just type "command:" and explain what you want to achieve. The script will always ask you if you want to execute the command. i.e.
"command: show me all files in this directory that have more than 150 lines of code"
*If a command modifies your file system or dowloads external files the script will show a warning before executing.
Options:
-i, --init-prompt - Provide initial chat prompt to use in context
--init-prompt-from-file - Provide initial prompt from file
-p, --prompt - Provide prompt instead of starting chat
--prompt-from-file - Provide prompt from file
-t, --temperature - Temperature
--max-tokens - Max number of tokens
-m, --model - Model
-s, --size - Image size. (The sizes that are accepted by the OpenAI API are 256x256, 512x512, 1024x1024)
-c, --chat-context - For models that do not support chat context by default (all models except gpt-3.5-turbo and gpt-4), you can enable chat context, for the model to remember your previous questions and its previous answers. It also makes models aware of todays date and what data it was trained on.
EOF
}
# error handling function
# $1 should be the response body
handle_error() {
if echo "$1" | jq -e '.error' >/dev/null; then
echo -e "Your request to Open AI API failed: \033[0;31m$(echo $1 | jq -r '.error.type')\033[0m"
echo $1 | jq -r '.error.message'
exit 1
fi
}
# request to OpenAI API completions endpoint function
# $1 should be the request prompt
request_to_completions() {
request_prompt="$1"
response=$(curl https://api.openai.com/v1/completions \
-sS \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "'"$MODEL"'",
"prompt": "'"${request_prompt}"'",
"max_tokens": '$MAX_TOKENS',
"temperature": '$TEMPERATURE'
}')
}
# request to OpenAI API image generations endpoint function
# $1 should be the prompt
request_to_image() {
prompt="$1"
image_response=$(curl https://api.openai.com/v1/images/generations \
-sS \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"prompt": "'"${prompt#*image:}"'",
"n": 1,
"size": "'"$SIZE"'"
}')
}
# request to OpenAPI API chat completion endpoint function
# $1 should be the message(s) formatted with role and content
request_to_chat() {
message="$1"
response=$(curl https://api.openai.com/v1/chat/completions \
-sS \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "'"$MODEL"'",
"messages": [
{"role": "system", "content": "'"$SYSTEM_PROMPT"'"},
'"$message"'
],
"max_tokens": '$MAX_TOKENS',
"temperature": '$TEMPERATURE'
}')
}
# build chat context before each request for /completions (all models except
# gpt turbo and gpt 4)
# $1 should be the chat context
# $2 should be the escaped prompt
build_chat_context() {
chat_context="$1"
escaped_prompt="$2"
if [ -z "$chat_context" ]; then
chat_context="$CHAT_INIT_PROMPT\nQ: $escaped_prompt"
else
chat_context="$chat_context\nQ: $escaped_prompt"
fi
request_prompt="${chat_context//$'\n'/\\n}"
}
# maintain chat context function for /completions (all models except
# gpt turbo and gpt 4)
# builds chat context from response,
# keeps chat context length under max token limit
# $1 should be the chat context
# $2 should be the response data (only the text)
maintain_chat_context() {
chat_context="$1"
response_data="$2"
# add response to chat context as answer
chat_context="$chat_context${chat_context:+\n}\nA: ${response_data//$'\n'/\\n}"
# check prompt length, 1 word =~ 1.3 tokens
# reserving 100 tokens for next user prompt
while (($(echo "$chat_context" | wc -c) * 1, 3 > (MAX_TOKENS - 100))); do
# remove first/oldest QnA from prompt
chat_context=$(echo "$chat_context" | sed -n '/Q:/,$p' | tail -n +2)
# add init prompt so it is always on top
chat_context="$CHAT_INIT_PROMPT $chat_context"
done
}
# build user chat message function for /chat/completions (gpt models)
# builds chat message before request,
# $1 should be the chat message
# $2 should be the escaped prompt
build_user_chat_message() {
chat_message="$1"
escaped_prompt="$2"
if [ -z "$chat_message" ]; then
chat_message="{\"role\": \"user\", \"content\": \"$escaped_prompt\"}"
else
chat_message="$chat_message, {\"role\": \"user\", \"content\": \"$escaped_prompt\"}"
fi
request_prompt="$chat_message"
}
# adds the assistant response to the message in (chatml) format
# for /chat/completions (gpt models)
# keeps messages length under max token limit
# $1 should be the chat message
# $2 should be the response data (only the text)
add_assistant_response_to_chat_message() {
chat_message="$1"
local local_response_data="$2"
# replace new line characters from response with space
local_response_data=$(echo "$local_response_data" | tr '\n' ' ')
# add response to chat context as answer
chat_message="$chat_message, {\"role\": \"assistant\", \"content\": \"$local_response_data\"}"
# transform to json array to parse with jq
chat_message_json="[ $chat_message ]"
# check prompt length, 1 word =~ 1.3 tokens
# reserving 100 tokens for next user prompt
while (($(echo "$chat_message" | wc -c) * 1, 3 > (MAX_TOKENS - 100))); do
# remove first/oldest QnA from prompt
chat_message=$(echo "$chat_message_json" | jq -c '.[2:] | .[] | {role, content}')
done
}
# parse command line arguments
while [[ "$#" -gt 0 ]]; do
case $1 in
-i | --init-prompt)
CHAT_INIT_PROMPT="$2"
SYSTEM_PROMPT="$2"
CONTEXT=true
shift
shift
;;
--init-prompt-from-file)
CHAT_INIT_PROMPT=$(cat "$2")
SYSTEM_PROMPT=$(cat "$2")
CONTEXT=true
shift
shift
;;
-p | --prompt)
prompt="$2"
shift
shift
;;
--prompt-from-file)
prompt=$(cat "$2")
shift
shift
;;
-t | --temperature)
TEMPERATURE="$2"
shift
shift
;;
--max-tokens)
MAX_TOKENS="$2"
shift
shift
;;
-m | --model)
MODEL="$2"
shift
shift
;;
-s | --size)
SIZE="$2"
shift
shift
;;
-c | --chat-context)
CONTEXT=true
shift
;;
-h | --help)
usage
exit 0
;;
*)
echo "Unknown parameter: $1"
exit 1
;;
esac
done
# set defaults
TEMPERATURE=${TEMPERATURE:-0.7}
MAX_TOKENS=${MAX_TOKENS:-1024}
MODEL=${MODEL:-gpt-3.5-turbo}
SIZE=${SIZE:-512x512}
CONTEXT=${CONTEXT:-false}
# Create a directory for chat history
chat_history=~/.chat_history/$(date +%Y-%m-%d).md
if [ ! -f "$chat_history" ]; then
touch "$chat_history"
chmod 600 "$chat_history"
echo "Created Chat History: '$chat_history'."
echo "======================="
else
prompt_count=$(grep -o "PROMPT" $chat_history | wc -l)
line_count=$(wc -l <$chat_history)
echo "Chat history exists: '$chat_history'."
echo "With $prompt_count prompts and $line_count lines."
echo "============================="
fi
# Create a directory for chat history images
chat_history_images=~/.chat_history/images
image_count="-type f \( -iname \"*.png\" \) | wc -l"
if [ ! -d "$chat_history_images" ]; then
mkdir -p "$chat_history_images"
chmod 755 "$chat_history_images"
echo "Created Chat History Images Directory: '$chat_history_images'."
echo "=============================================="
else
echo "Chat history images directory exists: '$chat_history_images'."
echo "There are $(find $chat_history_images -type f \( -iname "*.png" \) | wc -l) images."
echo "=============================================="
fi
running=true
# check input source and determine run mode
# prompt from argument, run on pipe mode (run once, no chat)
if [ -n "$prompt" ]; then
pipe_mode_prompt=${prompt}
# if input file_descriptor is a terminal, run on chat mode
elif [ -t 0 ]; then
echo -e "Welcome to chatgpt. You can quit with '\033[36mexit\033[0m' or '\033[36mq\033[0m'."
# prompt from pipe or redirected stdin, run on pipe mode
else
pipe_mode_prompt+=$(cat -)
fi
while $running; do
if [ -z "$pipe_mode_prompt" ]; then
echo -e "\nEnter a prompt:"
read -e prompt
if [ "$prompt" != "exit" ] && [ "$prompt" != "q" ]; then
echo -ne $PROCESSING_LABEL
fi
else
# set vars for pipe mode
prompt=${pipe_mode_prompt}
running=false
CHATGPT_CYAN_LABEL=""
fi
if [ "$prompt" == "exit" ] || [ "$prompt" == "q" ]; then
running=false
elif [[ "$prompt" =~ ^image: ]]; then
request_to_image "$prompt"
handle_error "$image_response"
image_url=$(echo $image_response | jq -r '.data[0].url')
echo -e "$OVERWRITE_PROCESSING_LINE"
echo -e "${CHATGPT_CYAN_LABEL}Your image was created. \n\n"
echo -e "Link: ${image_url}\n"
if [[ "$TERM_PROGRAM" == "iTerm.app" ]]; then
curl -sS $image_url -o temp_image.png
imgcat temp_image.png
rm temp_image.png
elif [[ "$TERM" == "xterm-kitty" ]]; then
# Get image and name as temp_image.png
curl -sS $image_url -o temp_image.png
# Get the number of images in the chat history images directory
IMAGE_COUNT=$(find $chat_history_images -type f \( -iname "*.png" \) | wc -l)
echo -e "Image count is: $IMAGE_COUNT\n"
# Move temp_image.png to chat history images directory with date and image count as part of the file name.
# EXAMPLE: 2023-08-26_10.png
mv temp_image.png $chat_history_images/$(date +%Y-%m-%d)_$(find $chat_history_images -type f \( -iname "*.png" \) | wc -l).png
# Echo path to image.
echo -e "Local path is: $chat_history_images/$(date +%Y-%m-%d)_$(find $chat_history_images -type f \( -iname "*.png" \) | wc -l).png\n"
# Display image in kitty terminal.
kitty +kitten icat $chat_history_images/$(date +%Y-%m-%d)_$(find $chat_history_images -type f \( -iname "*.png" \) | wc -l).png
# Copy image to clipboard using xclip.
xclip -selection clipboard -t image/png -i < $chat_history_images/$(date +%Y-%m-%d)_$(find $chat_history_images -type f \( -iname "*.png" \) | wc -l).png
echo -e "\033[33mImage\033[0m \033[33mon\033[0m \033[33mclipboard\033[0m 👍"
else
echo "Would you like to open it? (Yes/No)"
read -e answer
if [ "$answer" == "Yes" ] || [ "$answer" == "yes" ] || [ "$answer" == "y" ] || [ "$answer" == "Y" ] || [ "$answer" == "ok" ]; then
open "${image_url}"
fi
fi
elif [[ "$prompt" == "history" ]]; then
# echo -e "\n$(cat ~/.chatgpt_history)"
echo -e "\n$(cat $chat_history)"
elif [[ "$prompt" == "models" ]]; then
models_response=$(curl https://api.openai.com/v1/models \
-sS \
-H "Authorization: Bearer $OPENAI_API_KEY")
handle_error "$models_response"
models_data=$(echo $models_response | jq -r -C '.data[] | {id, owned_by, created}')
echo -e "$OVERWRITE_PROCESSING_LINE"
echo -e "${CHATGPT_CYAN_LABEL}This is a list of models currently available at OpenAI API:\n ${models_data}"
elif [[ "$prompt" =~ ^model: ]]; then
models_response=$(curl https://api.openai.com/v1/models \
-sS \
-H "Authorization: Bearer $OPENAI_API_KEY")
handle_error "$models_response"
model_data=$(echo $models_response | jq -r -C '.data[] | select(.id=="'"${prompt#*model:}"'")')
echo -e "$OVERWRITE_PROCESSING_LINE"
echo -e "${CHATGPT_CYAN_LABEL}Complete details for model: ${prompt#*model:}\n ${model_data}"
elif [[ "$prompt" =~ ^command: ]]; then
# escape quotation marks
escaped_prompt=$(echo "$prompt" | sed 's/"/\\"/g')
# escape new lines
if [[ "$prompt" =~ ^command: ]]; then
escaped_prompt=${prompt#command:}
request_prompt=$COMMAND_GENERATION_PROMPT${escaped_prompt//$'\n'/' '}
fi
build_user_chat_message "$chat_message" "$request_prompt"
request_to_chat "$request_prompt"
handle_error "$response"
response_data=$(echo $response | jq -r '.choices[].message.content')
if [[ "$prompt" =~ ^command: ]]; then
echo -e "$OVERWRITE_PROCESSING_LINE"
echo -e "${CHATGPT_CYAN_LABEL} ${response_data}" | fold -s -w $COLUMNS
dangerous_commands=("rm" ">" "mv" "mkfs" ":(){:|:&};" "dd" "chmod" "wget" "curl")
for dangerous_command in "${dangerous_commands[@]}"; do
if [[ "$response_data" == *"$dangerous_command"* ]]; then
echo "Warning! This command can change your file system or download external scripts & data. Please do not execute code that you don't understand completely."
fi
done
echo "Would you like to execute it? (Yes/No)"
read run_answer
if [ "$run_answer" == "Yes" ] || [ "$run_answer" == "yes" ] || [ "$run_answer" == "y" ] || [ "$run_answer" == "Y" ]; then
echo -e "\nExecuting command: $response_data\n"
eval $response_data
fi
fi
escaped_response_data=$(echo "$response_data" | sed 's/"/\\"/g')
add_assistant_response_to_chat_message "$chat_message" "$escaped_response_data"
# timestamp=$(date +"%d/%m/%Y %H:%M")
timestamp=$(date +"%Y/%m/%d %H:%M")
# echo -e "$timestamp \n$prompt \n$response_data \n" >>~/.chatgpt_history
echo -e "================\n$timestamp \n----------------\nPROMPT:\n$prompt \n\nRESPONSE:\n$response_data \n" >>$chat_history
echo -e "$response_data" | xclip -selection clipboard
echo -e "\033[33mResponse\033[0m \033[33mon\033[0m \033[33mclipboard\033[0m 👍"
elif [[ "$MODEL" =~ ^gpt- ]]; then
# escape quotation marks
escaped_prompt=$(echo "$prompt" | sed 's/"/\\"/g')
# escape new lines
request_prompt=${escaped_prompt//$'\n'/' '}
build_user_chat_message "$chat_message" "$request_prompt"
request_to_chat "$request_prompt"
handle_error "$response"
response_data=$(echo "$response" | jq -r '.choices[].message.content')
echo -e "$OVERWRITE_PROCESSING_LINE"
# if glow installed, print parsed markdown
if command -v glow &>/dev/null; then
echo -e "${CHATGPT_CYAN_LABEL}"
echo "${response_data}" | glow -
#echo -e "${formatted_text}"
else
echo -e "${CHATGPT_CYAN_LABEL}${response_data}" | fold -s -w $COLUMNS
fi
escaped_response_data=$(echo "$response_data" | sed 's/"/\\"/g')
add_assistant_response_to_chat_message "$chat_message" "$escaped_response_data"
# timestamp=$(date +"%d/%m/%Y %H:%M")
timestamp=$(date +"%Y/%m/%d %H:%M")
# echo -e "$timestamp \n$prompt \n$response_data \n" >>~/.chatgpt_history
echo -e "================\n$timestamp \n----------------\nPROMPT:\n$prompt \n\nRESPONSE:\n$response_data \n" >>$chat_history
echo -e "$response_data" | xclip -selection clipboard
echo -e "\033[33mResponse\033[0m \033[33mon\033[0m \033[33mclipboard\033[0m 👍"
else
# escape quotation marks
escaped_prompt=$(echo "$prompt" | sed 's/"/\\"/g')
# escape new lines
request_prompt=${escaped_prompt//$'\n'/' '}
if [ "$CONTEXT" = true ]; then
build_chat_context "$chat_context" "$escaped_prompt"
fi
request_to_completions "$request_prompt"
handle_error "$response"
response_data=$(echo "$response" | jq -r '.choices[].text')
echo -e "$OVERWRITE_PROCESSING_LINE"
# if glow installed, print parsed markdown
if command -v glow &>/dev/null; then
echo -e "${CHATGPT_CYAN_LABEL}"
echo "${response_data}" | glow -
else
# else remove empty lines and print
formatted_text=$(echo "${response_data}" | sed '1,2d; s/^A://g')
echo -e "${CHATGPT_CYAN_LABEL}${formatted_text}" | fold -s -w $COLUMNS
fi
if [ "$CONTEXT" = true ]; then
escaped_response_data=$(echo "$response_data" | sed 's/"/\\"/g')
maintain_chat_context "$chat_context" "$escaped_response_data"
fi
# timestamp=$(date +"%d/%m/%Y %H:%M")
timestamp=$(date +"%Y/%m/%d %H:%M")
# echo -e "$timestamp \n$prompt \n$response_data \n" >>~/.chatgpt_history
echo -e "================\n$timestamp \n----------------\nPROMPT:\n$prompt \n\nRESPONSE:\n$response_data \n" >>$chat_history
echo -e "$response_data" | xclip -selection clipboard
echo -e "\033[33mResponse\033[0m \033[33mon\033[0m \033[33mclipboard\033[0m 👍"
fi
done
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment