@kfsone — created November 16, 2023 22:28
(venv) root@b7e76e6d82d8:/workspace# echo $OPENAI_API_BASE
http://localhost:11434/
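
Two things about that base URL are worth flagging before the test run. First, this litellm build drives the openai Python SDK v1 (visible in the traceback below), and v1 reads OPENAI_BASE_URL rather than the older OPENAI_API_BASE name, so the variable may not reach every code path — the first request below evidently still goes out to api.openai.com. Second, a quick sanity check of what is actually listening on that port; Ollama answers a plain "Ollama is running" on its root path:

curl http://localhost:11434/
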
(venv) root@b7e76e6d82d8:/workspace# litellm --test --port ${LITELLM_PORT}
LiteLLM: Making a test ChatCompletions request to your proxy
An error occurred: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: none. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Debug this by setting `--debug`, e.g. `litellm --model gpt-3.5-turbo --debug`
INFO: 127.0.0.1:35604 - "POST /chat/completions HTTP/1.1" 200 OK
LiteLLM: response from proxy ChatCompletion(id=None, choices=None, created=None, model=None, object=None, system_fingerprint=None, usage=None, error='OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: none. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Traceback (most recent call last):
  File "/app/venv/lib/python3.10/site-packages/litellm/main.py", line 556, in completion
    raise e
  File "/app/venv/lib/python3.10/site-packages/litellm/main.py", line 535, in completion
    response = openai_chat_completions.completion(
  File "/app/venv/lib/python3.10/site-packages/litellm/llms/openai.py", line 267, in completion
    raise e
  File "/app/venv/lib/python3.10/site-packages/litellm/llms/openai.py", line 262, in completion
    raise e
  File "/app/venv/lib/python3.10/site-packages/litellm/llms/openai.py", line 230, in completion
    response = openai.chat.completions.create(**data)
  File "/app/venv/lib/python3.10/site-packages/openai/_utils/_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
  File "/app/venv/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 598, in create
    return self._post(
  File "/app/venv/lib/python3.10/site-packages/openai/_base_client.py", line 1055, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/app/venv/lib/python3.10/site-packages/openai/_base_client.py", line 834, in request
    return self._request(
  File "/app/venv/lib/python3.10/site-packages/openai/_base_client.py", line 877, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: none. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/venv/lib/python3.10/site-packages/litellm/proxy/proxy_server.py", line 566, in chat_completion
    return litellm_completion(
  File "/app/venv/lib/python3.10/site-packages/litellm/proxy/proxy_server.py", line 470, in litellm_completion
    raise e
  File "/app/venv/lib/python3.10/site-packages/litellm/proxy/proxy_server.py", line 466, in litellm_completion
    response = litellm.completion(*args, **kwargs)
  File "/app/venv/lib/python3.10/site-packages/litellm/utils.py", line 1238, in wrapper
    raise e
  File "/app/venv/lib/python3.10/site-packages/litellm/utils.py", line 1172, in wrapper
    result = original_function(*args, **kwargs)
  File "/app/venv/lib/python3.10/site-packages/litellm/main.py", line 1372, in completion
    raise exception_type(
  File "/app/venv/lib/python3.10/site-packages/litellm/utils.py", line 4165, in exception_type
    raise e
  File "/app/venv/lib/python3.10/site-packages/litellm/utils.py", line 3310, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: none. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
')
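
So the 401 is coming back from api.openai.com itself, not from anything local: `litellm --test` issues a ChatCompletions request for gpt-3.5-turbo, the proxy routes that through the OpenAI provider, and with no OPENAI_API_KEY set the key goes out as the literal string "none" — hence "Incorrect API key provided: none". Setting OPENAI_API_BASE alone does not make the proxy serve the Ollama model. A minimal sketch of restarting the proxy against the local model instead (the model name llama2 is an assumption — substitute whatever `ollama list` reports):

# point the proxy at the local Ollama model, then re-run the self-test
litellm --model ollama/llama2 --api_base http://localhost:11434 --port ${LITELLM_PORT}
litellm --test --port ${LITELLM_PORT}
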
Making streaming request to proxy
INFO: 127.0.0.1:35604 - "POST /chat/completions HTTP/1.1" 200 OK
[GIN] 2023/11/16 - 22:27:06 | 404 | 3.788µs | 127.0.0.1 | POST "/chat/completions"
ERROR: Exception in ASGI application
Traceback (most recent call last):
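
(The ASGI traceback is truncated in the capture.) For the streaming leg, the [GIN] line is Ollama's own Go/Gin server logging the forwarded request: POST /chat/completions did reach http://localhost:11434, but this Ollama build exposes no such route, so it answered 404 in a few microseconds; the proxy's streaming handler then failed on that non-OpenAI response, producing the ASGI exception above. Ollama's native endpoints live under /api, which gives a quick way to confirm the server and model actually respond (model name assumed, as before):

curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Say hi", "stream": false}'
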