@aidando73
Last active January 4, 2025 04:01
comments.md
ashwinb commented Dec 11, 2024

Thanks for all your contributions @aidando73 -- we want to keep supporting completions at least for now, because we believe raw access to a model is just as important. Unlike other providers, Llamas are open source, and people play with them and iterate on them in a variety of ways. The manipulations a chat_completion endpoint performs internally may not always be what users intend; sometimes they just want to hit the model directly with a carefully formatted prompt.
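
To illustrate the distinction: a minimal sketch (not llama-stack code, just an assumption-laden example) of hand-building a Llama 3 style instruct prompt with its special tokens, which a user could send to a raw completions endpoint instead of letting a chat_completion layer apply its own template:

```python
def format_llama3_prompt(user_message: str) -> str:
    # Simplified Llama 3 instruct template (no system prompt) -- the point is
    # that the user controls the exact token layout, byte for byte.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("Explain KV caching.")
# This exact string is what a raw /completions endpoint would receive.
```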

On that theme, I think it would be great if Groq could build a completions endpoint on their end too. But until then, NotImplementedError() will have to do.
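
A hypothetical sketch of that fallback (the class and method names are illustrative, not the actual llama-stack or Groq adapter API): the adapter supports chat_completion but fails loudly on raw completion rather than silently reformatting the prompt.

```python
class GroqAdapter:
    """Illustrative provider adapter; names are hypothetical."""

    def chat_completion(self, messages: list[dict], model: str) -> str:
        # In a real adapter this would call the provider's chat API;
        # stubbed out here for the sketch.
        return f"response from {model}"

    def completion(self, prompt: str, model: str) -> str:
        # Groq does not expose a raw completions endpoint, so surface that
        # clearly instead of approximating it through the chat API.
        raise NotImplementedError(
            "Groq does not support raw completions; use chat_completion instead."
        )
```

Raising NotImplementedError keeps the interface uniform across providers while making the capability gap explicit to callers.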
