@thesephist
Created November 2, 2022 02:44
"Open sourcing" the little CLI from https://twitter.com/thesephist/status/1587593832002072576

ask CLI

ask is a little CLI I made to interact with OpenAI's GPT-3 (text-davinci-002) from my shell/terminal. The instruction fine-tuning on that model makes it particularly well suited to asking questions and making requests.

With this CLI, I can do something like:

$ ask 'Write a haskell function that reverses a string'
reverseString :: String -> String
reverseString = foldl (\acc x -> x : acc) []

I can also pipe input into ask:

$ echo 'Write a haskell function that reverses a string' | ask --stdin
reverseString :: String -> String
reverseString = foldl (\acc x -> x : acc) []

There's a small set of flags for quickly controlling the model's output sampling:

$ ask --help
Ask: ask OpenAI GPT-3

Usage
	ask [prompt] [options]
	your-program | ask --stdin [options]

CLI options
	--[h]elp        Show this help message
	--[v]ersion     Print version information and exit

Text generation options
	--[n]           Number of completions to generate
	--[t]emperature Temperature for output sampling, default 1.0
	--top-p         top_p value for nucleus sampling, default 0.9
	--max           Max number of tokens to generate, default 256
Here's the full source of ask.oak:

{
	println: println
	default: default
	map: map
	stdin: stdin
	slice: slice
	append: append
} := import('std')
{
	trim: trim
	join: join
	endsWith?: endsWith?
} := import('str')
fmt := import('fmt')
json := import('json')
debug := import('debug')
cli := import('cli')

Version := '1.0'
APIKey := 'sk-XXXXXXXXXXX' // your OpenAI API key

Cli := with cli.parseArgv() if {
	args().1 |> default('') |> endsWith?('main.oak') -> args()
	_ -> ['oak', 'ask.oak'] |> append(args() |> slice(1))
}
if Cli.verb != ? -> Cli.args := [Cli.verb] |> append(Cli.args)

if Cli.opts.version |> default(Cli.opts.v) != ? -> {
	fmt.printf('Ask v{{0}}', Version)
	exit(0)
}
if Cli.opts.help |> default(Cli.opts.h) != ? -> {
	println('Ask: ask OpenAI GPT-3

Usage
	ask [prompt] [options]
	your-program | ask --stdin [options]

CLI options
	--[h]elp        Show this help message
	--[v]ersion     Print version information and exit

Text generation options
	--[n]           Number of completions to generate
	--[t]emperature Temperature for output sampling, default 1.0
	--top-p         top_p value for nucleus sampling, default 0.9
	--max           Max number of tokens to generate, default 256
')
	exit(0)
}

params := {
	model: 'text-davinci-002'
	prompt: if Cli.opts.stdin {
		// no --stdin flag: the prompt is the CLI arguments themselves
		? -> Cli.args |> join(' ')
		_ -> stdin()
	} |> trim()
	// the flag is --top-p, so the parsed option key is 'top-p'
	top_p: float(Cli.opts.'top-p') |> default(0.9)
	max_tokens: int(Cli.opts.max) |> default(256)
	temperature: float(Cli.opts.temperature) |> default(float(Cli.opts.t)) |> default(1.0)
	n: int(Cli.opts.n) |> default(1)
}
if Cli.opts.stop != ? -> params.stop := [Cli.opts.stop]
if Cli.opts.d |> default(Cli.opts.debug) != ? -> {
	// --debug: print the request params instead of calling the API
	debug.println(params)
	exit(0)
}

if params.prompt != '' -> with req({
	method: 'POST'
	url: fmt.format('https://api.openai.com/v1/completions')
	headers: {
		'Authorization': 'Bearer ' << APIKey
		'Content-Type': 'application/json'
	}
	body: json.serialize(params)
}) fn(evt) if evt.type {
	:error -> println('error:', evt.error)
	_ -> if params.n {
		// single completion: print it directly
		1 -> json.parse(evt.resp.body).choices.(0).text |>
			trim() |>
			println()
		// multiple completions: print them separated by "---"
		_ -> json.parse(evt.resp.body).choices |>
			map(:text) |>
			map(fn(s) trim(s)) |>
			join('\n---\n') |>
			println()
	}
}
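Under the hood, ask just makes one HTTP POST to OpenAI's completions endpoint with the sampling parameters above. For anyone who wants to port it, here's a rough Python sketch of the same request; the build_request helper and the key placeholder are mine, not part of the CLI:

```python
import json
import urllib.request

API_KEY = "sk-XXXXXXXXXXX"  # your OpenAI API key (placeholder)

def build_request(prompt, n=1, temperature=1.0, top_p=0.9, max_tokens=256):
    # Same JSON body the Oak CLI builds with json.serialize(params)
    body = json.dumps({
        "model": "text-davinci-002",
        "prompt": prompt.strip(),
        "top_p": top_p,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "n": n,
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=body,
        headers={
            "Authorization": "Bearer " + API_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually ask something:
# resp = urllib.request.urlopen(build_request("Write a haskell function that reverses a string"))
# print(json.loads(resp.read())["choices"][0]["text"].strip())
```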