Hi folks! I love this idea as well but can't seem to get it to work. Not very familiar with Automator.
I get a blank dialog when I try to run the Quick Action:
This is what the Quick Action definition looks like:
escaped_args=""
for arg in "$@"; do
# Escape single quotes (replace ' with '\'')
escaped_arg=$(printf '%s\n' "$arg" | sed "s/'/'\\\\''/g")
# Append the escaped argument to the string, surrounded by single quotes
escaped_args="$escaped_args '$escaped_arg'"
done
# Use the escaped arguments in your command
result=$(/Users/joshuaziggas/Library/Python/3.9/bin/llm -m gpt-4 $escaped_args)
# Handle the result as before
escapedResult=$(echo "$result" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | awk '{printf "%s\\n", $0}' ORS='')
osascript -e "display dialog \"$escapedResult\""
I ran brew install llm and llm keys set openai, and then confirmed my key works with llm "Ten fun names for a pet pelican"
Any ideas?
(Apple M1 Max on Ventura 13.5 22G74)
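One thing worth noting about the script above (my observation, not from the thread): the escaped string is expanded unquoted, so the shell never re-parses those single quotes, and the prompt can reach llm with literal ' characters in it. Passing the arguments straight through with "$@" avoids the escaping dance entirely. A minimal sketch, with echo standing in for the llm call:

```shell
#!/bin/sh
# Sketch: forward the Quick Action's arguments verbatim with "$@".
# echo stands in for the real llm invocation here.
run_prompt() {
    # In the actual script this would be something like:
    #   /opt/homebrew/bin/llm -m gpt-4 "$@"
    echo "prompt: $*"
}
run_prompt "it's a test" "with two args"
```

Because "$@" preserves each argument exactly as received, no sed escaping is needed at all.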
um… if i have llm installed in a conda environment, how would i need to adapt this?
@simonw, maybe consider supporting this 'officially' (= make it easier to set up)? i am aware this gets annoying with different setups (like mine) but this really sounds insanely useful! (by the way: huge fan of your blog!)
@jziggas I experienced the exact same output when I blindly copied the code with ~eliya in the path. Fixing that to the correct path fixed the issue for me. Are you sure /Users/joshuaziggas/Library/Python/3.9/bin/llm is the correct path for your llm binary?
@jziggas - running it directly as a Quick Action won't work, because there is no argument/context that gets passed along as the prompt. I would save the Automator action and try to invoke it from the Services context menu in some other app after highlighting some text. Does that work?
Also, to @mbafford's point, make sure the path to the binary is right. You can find it by typing which llm into the terminal.
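To expand on that (a standalone sketch; `command -v` is the POSIX-portable equivalent of `which`):

```shell
# Resolve the absolute path of llm so it can be hardcoded in the script;
# falls back to a message if llm is not on PATH.
llm_path=$(command -v llm || true)
msg="llm is at: ${llm_path:-not found on PATH}"
echo "$msg"
```

Whatever path this prints is what belongs in the script in place of the hardcoded /Users/... path.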
@maschinenzeitmaschine have you tried hardcoding the path to the llm binary in the shell script part? Not too familiar with Conda, but I think that would work with something like a Python venv (for example). I imagine it would be similar?
@mbafford Ugh, good catch - since I installed via Homebrew that path was /opt/homebrew/bin/llm. Thanks :)
@eliyastein thanks, that did the trick!
for the record:
i got it to run with llm running in conda, below is my script (conda base env, miniconda installed through brew).
note: this does NOT work if llm is set to use a custom directory location; use the default one!
LLM_COMMAND="/System/Volumes/Data/opt/homebrew/Caskroom/miniconda/base/bin/llm"
escaped_args=""
for arg in "$@"; do
# Escape single quotes (replace ' with '\'')
escaped_arg=$(printf '%s\n' "$arg" | sed "s/'/'\\\\''/g")
# Append the escaped argument to the string, surrounded by single quotes
escaped_args="$escaped_args '$escaped_arg'"
done
# Use the escaped arguments in your command
result=$($LLM_COMMAND -m claude-3-opus $escaped_args)
# Handle the result as before
escapedResult=$(echo "$result" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | awk '{printf "%s\\n", $0}' ORS='')
osascript -e "display dialog \"$escapedResult\""
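If the Caskroom path differs between machines, the hardcoded constant can be made a little more forgiving (my variation, assuming the same Homebrew miniconda layout; CONDA_BASE is an assumption you should adjust):

```shell
# Try the Homebrew miniconda location first, then fall back to whatever
# llm is on PATH. CONDA_BASE is an assumption; adjust for your install.
CONDA_BASE="/opt/homebrew/Caskroom/miniconda/base"
if [ -x "$CONDA_BASE/bin/llm" ]; then
    LLM_COMMAND="$CONDA_BASE/bin/llm"
else
    LLM_COMMAND=$(command -v llm || echo llm)
fi
echo "using: $LLM_COMMAND"
```

The rest of the script can then use $LLM_COMMAND exactly as before.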
p.s. i'm well aware this is out of scope and needs quite a bit more than a shell script, but wouldn't it be insanely cool if this opened not just a text window with an answer, but an interactive shell? like a chat interface, so i could ask follow-up questions? i really love the idea of having an llm available anywhere in the finder without having to open an app or browser or terminal and copy/paste every time…
@maschinenzeitmaschine - the script can be modified to pop open a terminal that starts a chat with the highlighted text as the system prompt. Something like this:
escaped_args=""
for arg in "$@"; do
escaped_arg=$(printf '%s\n' "$arg" | sed "s/'/'\\\\''/g")
escaped_args="$escaped_args '$escaped_arg'"
done
osascript -e "tell application \"Terminal\" to do script \"/usr/local/bin/llm chat -m gpt-4 -s ${escaped_args}\""
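One fragility worth noting (my observation): the highlighted text is interpolated into both a shell command and an AppleScript string, so embedded double quotes can break the osascript call. It helps to build the command string first so it can be inspected; a sketch with a sample argument, the osascript step commented out so it runs anywhere (the /usr/local/bin/llm path is taken from the snippet above and may differ on your machine):

```shell
# Build the Terminal command string first, using the same escaping loop.
escaped_args=""
for arg in "it's a demo prompt"; do
    escaped_arg=$(printf '%s\n' "$arg" | sed "s/'/'\\\\''/g")
    escaped_args="$escaped_args '$escaped_arg'"
done
cmd="/usr/local/bin/llm chat -m gpt-4 -s ${escaped_args}"
echo "$cmd"
# macOS only; uncomment to actually open the Terminal chat:
# osascript -e "tell application \"Terminal\" to do script \"$cmd\""
```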
While this is great, I wish I could use it in any app. Currently I cannot access this Automator workflow from within Slack :/
@devtanna - It works from Slack for me. If you highlight some text in Slack, you can access the workflow from the menu Slack -> Services -> LLM, or you can invoke it with a hotkey if you've set one under Keyboard Shortcuts.
oh cool, thanks @eliyastein ! works now :)
Hey @simonw - You don't need the "Get Specified Text" action. My setup looks like this:
You might need the full path to the llm binary. Here's my full script:
At that point, once you save, you should have it available under the Services menu:
You can then create a hotkey for it under Keyboard Shortcuts for ease of use under Services: