Recursal provides an OpenAI-compatible interface to their hosted RWKV models.
These can easily be configured for access and interrogation through `llm`.
Specifically, suppose the Recursal dashboard suggests you use the following in your Python code:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://router.recursal.com/hVtcCcQK8xF6CPkb4qT2W",
    api_key="API_KEY_REDACTED",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
```
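Under the hood, "OpenAI compatible" means the endpoint accepts the standard chat completions JSON body at the `api_base` URL, with the key sent as a bearer token. A minimal sketch of the request body the client above constructs (built but not sent here, since it needs a live key):

```python
import json

# The JSON body a chat.completions.create call serializes; the client
# POSTs this to {base_url}/chat/completions with an
# "Authorization: Bearer <api_key>" header.
body = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}
payload = json.dumps(body)
print(payload)
```

This is the same payload `llm` will send once configured below; only the model name, URL, and key come from the YAML file.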
Suppose further that you have `llm` installed. Then:
- Give `llm` your Recursal key via `llm keys set recursal`.
- Create a file titled `extra-openai-models.yaml` in the directory given by `dirname "$(llm logs path)"` and insert the following content:
```yaml
- model_id: recursal-hvtc # How you reference this model from the CLI (e.g. llm -m recursal-hvtc "Your prompt")
  aliases: ["my-recursal-fine-tune"] # Alternate CLI references
  model_name: gpt-3.5-turbo # Passed to the API; in Recursal's case, a _teacher_/pass-through model
  api_base: "https://router.recursal.com/hVtcCcQK8xF6CPkb4qT2W" # URL of a Recursal project
  api_key_name: recursal # Set in the previous step
```
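The file-creation step can be sketched in Python. Assumptions here: the `/tmp/llm-demo` fallback is a demonstration placeholder, not a real `llm` path; in practice the target directory is the one printed by `dirname "$(llm logs path)"` (llm also honors a `LLM_USER_PATH` environment variable for its config directory):

```python
import os

# Demo fallback path; normally use the directory printed by
# dirname "$(llm logs path)"
config_dir = os.environ.get("LLM_USER_PATH", "/tmp/llm-demo")
os.makedirs(config_dir, exist_ok=True)

config = """\
- model_id: recursal-hvtc
  aliases: ["my-recursal-fine-tune"]
  model_name: gpt-3.5-turbo
  api_base: "https://router.recursal.com/hVtcCcQK8xF6CPkb4qT2W"
  api_key_name: recursal
"""
path = os.path.join(config_dir, "extra-openai-models.yaml")
with open(path, "w") as f:
    f.write(config)
```

`llm` reads this file on startup, so the new `model_id` and aliases are available to the next invocation.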
You will now be able to interact with Recursal from your CLI, e.g.:

```shell
llm -m my-recursal-fine-tune "What can you tell me about President Abraham Lincoln?"
```
Note that at the time of writing (March 13th), Recursal's 7B Eagle model is also available via OpenRouter; see this blog post.
Install the llm-openrouter plugin (e.g. via pip), make an OpenRouter API key available to your `llm` instance via `llm keys set openrouter`, and then run:

```shell
llm -m openrouter/recursal/eagle-7b "Your prompt goes here"
```