openapi: 3.0.0
info:
  title: Conjecture API
  description: The Conjecture REST API.
  version: 0.0.1
  termsOfService: "https://conjecture.dev/terms-of-use"
  contact:
    name: Conjecture Support
    url: "https://help.conjecture.dev/"
servers:
  - url: "https://api.conjecture.dev/v1"
tags:
  - name: Audio
    description: Turn audio into text, and text into audio.
  - name: Chat
    description: Talk to our models.
  - name: Embeddings
    description: >-
      Get a vector representation of a given input that can be easily consumed
      by machine learning models and algorithms.
  - name: Models
    description: List and describe the various models available in the API.
paths:
  /chat/completions:
    post:
      operationId: createChatCompletion
      tags:
        - Chat
      summary: Creates a model response for the given chat conversation.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateChatCompletionRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateChatCompletionResponse"
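  # Illustrative sketch (not part of the spec): a minimal chat completion
  # request with curl, assuming a bearer API key in $CONJECTURE_API_KEY.
  #
  #   curl https://api.conjecture.dev/v1/chat/completions \
  #     -H "Authorization: Bearer $CONJECTURE_API_KEY" \
  #     -H "Content-Type: application/json" \
  #     -d '{"model": "conjecture-1-fast",
  #          "messages": [{"role": "user", "content": "Hello!"}]}'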
  /embeddings:
    post:
      operationId: createEmbedding
      tags:
        - Embeddings
      summary: Creates an embedding vector representing the input text.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateEmbeddingRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateEmbeddingResponse"
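  # Illustrative sketch (not part of the spec): embedding a string for
  # retrieval, assuming a bearer API key in $CONJECTURE_API_KEY and that
  # "conjecture-1" is a valid model ID.
  #
  #   curl https://api.conjecture.dev/v1/embeddings \
  #     -H "Authorization: Bearer $CONJECTURE_API_KEY" \
  #     -H "Content-Type: application/json" \
  #     -d '{"model": "conjecture-1",
  #          "input": "The quick brown fox jumped over the lazy dog",
  #          "embedding_type": "retrieval"}'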
  /audio/transcriptions:
    post:
      operationId: createTranscription
      tags:
        - Audio
      summary: Transcribes audio into the input language.
      requestBody:
        required: true
        content:
          multipart/form-data:
            schema:
              $ref: "#/components/schemas/CreateTranscriptionRequest"
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CreateTranscriptionResponse"
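  # Illustrative sketch (not part of the spec): transcribing a local file
  # via multipart/form-data, assuming a bearer API key in $CONJECTURE_API_KEY.
  #
  #   curl https://api.conjecture.dev/v1/audio/transcriptions \
  #     -H "Authorization: Bearer $CONJECTURE_API_KEY" \
  #     -F file=@audio.mp3 \
  #     -F language=en \
  #     -F diarisation=true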
  /audio/speech:
    post:
      operationId: createSpeech
      tags:
        - Audio
      summary: Converts text to speech.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateSpeechRequest"
      responses:
        "200":
          description: OK
          content:
            audio/mpeg:
              schema:
                $ref: "#/components/schemas/CreateSpeechResponse"
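  # Illustrative sketch (not part of the spec): synthesising speech to an MP3
  # file, assuming a bearer API key in $CONJECTURE_API_KEY and a voice ID
  # obtained from GET /audio/voices.
  #
  #   curl https://api.conjecture.dev/v1/audio/speech \
  #     -H "Authorization: Bearer $CONJECTURE_API_KEY" \
  #     -H "Content-Type: application/json" \
  #     -d '{"text": "Hello, world.", "voice_id": "VOICE_ID"}' \
  #     --output speech.mp3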
  /audio/voices:
    get:
      operationId: getVoices
      tags:
        - Audio
      summary: Retrieves available voices.
      responses:
        "200":
          description: Successful Response
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/VoicesResponse"
  /models:
    get:
      operationId: listModels
      tags:
        - Models
      summary: >-
        Lists the currently available models, and provides basic information
        about each one such as the owner and availability.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ListModelsResponse"
  "/models/{model}":
    get:
      operationId: retrieveModel
      tags:
        - Models
      summary: >-
        Retrieves a model instance, providing basic information about the model
        such as permissioning.
      parameters:
        - in: path
          name: model
          required: true
          schema:
            type: string
            example: conjecture-1-fast
          description: The ID of the model to use for this request.
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Model"
components:
  securitySchemes:
    ApiKeyAuth:
      type: http
      scheme: bearer
  schemas:
    ErrorResponse:
      type: object
      properties:
        code:
          type: string
          nullable: true
        message:
          type: string
          nullable: false
      required:
        - message
        - code
    ListModelsResponse:
      type: object
      properties:
        data:
          type: array
          items:
            $ref: "#/components/schemas/Model"
      required:
        - data
    ChatCompletionRequestMessage:
      type: object
      properties:
        content:
          type: string
          nullable: true
          description: The contents of the message. `content` is required for all messages.
        role:
          type: string
          enum:
            - system
            - user
            - assistant
          description: >-
            The role of the message's author. One of `system`, `user`, or
            `assistant`.
      required:
        - content
        - role
    ChatCompletionResponseMessage:
      type: object
      description: A chat completion message generated by the model.
      properties:
        content:
          type: string
          description: The contents of the message.
          nullable: true
        role:
          type: string
          enum:
            - system
            - user
            - assistant
          description: The role of the author of this message.
      required:
        - role
        - content
    ChatCompletionStreamResponseDelta:
      type: object
      description: A chat completion delta generated by streamed model responses.
      properties:
        content:
          type: string
          description: The contents of the chunk message.
          nullable: true
        role:
          type: string
          enum:
            - system
            - user
            - assistant
          description: The role of the author of this message.
    CreateChatCompletionRequest:
      type: object
      properties:
        messages:
          description: >-
            A list of messages comprising the conversation so far. [Example
            Python
            code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
          type: array
          minItems: 1
          items:
            $ref: "#/components/schemas/ChatCompletionRequestMessage"
        model:
          description: ID of the model to use.
          example: conjecture-1-fast
          anyOf:
            - type: string
            - type: string
              enum:
                - conjecture-1-fast
                - conjecture-1
                - conjecture-2
        frequency_penalty:
          type: number
          default: 0
          minimum: -2
          maximum: 2
          nullable: true
          description: >-
            Number between -2.0 and 2.0. Positive values penalize new tokens
            based on their existing frequency in the text so far, decreasing the
            model's likelihood to repeat the same line verbatim.
        logprobs:
          type: integer
          default: 0
          minimum: 0
          maximum: 2
          nullable: true
          description: The number of log-probabilities to return for each token.
        logit_bias:
          type: object
          default: null
          nullable: true
          additionalProperties:
            type: integer
          description: >-
            Modify the likelihood of specified tokens appearing in the
            completion. Accepts a JSON object that maps tokens (specified by
            their token ID in the tokenizer) to an associated bias value from
            -100 to 100. Mathematically, the bias is added to the logits
            generated by the model prior to sampling. The exact effect will
            vary per model, but values between -1 and 1 should decrease or
            increase likelihood of selection; values like -100 or 100 should
            result in a ban or exclusive selection of the relevant token.
        max_tokens:
          description: >-
            The maximum number of tokens to generate in the chat completion.
            The total length of input tokens and generated tokens is limited by
            the model's context length.
          default: inf
          type: integer
          nullable: true
        "n":
          type: integer
          minimum: 1
          maximum: 2048
          default: 1
          example: 1
          nullable: true
          description: How many chat completion choices to generate for each input message.
        presence_penalty:
          type: number
          default: 0
          minimum: 0
          maximum: 10
          nullable: true
          description: >
            Number between 0.0 and 10.0. Higher values penalize new tokens based
            on whether they appear in the text so far, increasing the model's
            likelihood to talk about new topics.
        stop:
          description: Up to 4 sequences where the API will stop generating further tokens.
          default: null
          oneOf:
            - type: string
              nullable: true
            - type: array
              minItems: 1
              maxItems: 4
              items:
                type: string
        stream:
          description: >
            If set, partial message deltas will be sent, like in ChatGPT. Tokens
            will be sent as data-only [server-sent
            events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
            as they become available, with the stream terminated by a `data:
            [DONE]` message.
          type: boolean
          nullable: true
          default: false
        temperature:
          type: number
          minimum: 0
          maximum: 3
          default: 1
          example: 1
          nullable: true
          description: >-
            What sampling temperature to use, between 0 and 3. Higher values
            like 0.8 will make the output more random, while lower values like
            0.2 will make it more focused and deterministic. We generally
            recommend altering this or `top_p` but not both.
        top_p:
          type: number
          minimum: 0
          maximum: 1
          default: 1
          example: 1
          nullable: true
          description: >-
            An alternative to sampling with temperature, called nucleus
            sampling, where the model considers the results of the tokens with
            top_p probability mass. So 0.1 means only the tokens comprising the
            top 10% probability mass are considered. We generally recommend
            altering this or `temperature` but not both.
        top_k:
          type: integer
          minimum: 0
          maximum: 10000
          default: 0
          example: 0
          nullable: true
          description: >
            Sets the number of highest probability tokens considered for
            sampling.
      required:
        - model
        - messages
    CreateChatCompletionResponse:
      type: object
      description: >-
        Represents a chat completion response returned by the model, based on
        the provided input.
      properties:
        id:
          type: string
          description: A unique identifier for the chat completion.
        choices:
          type: array
          description: >-
            A list of chat completion choices. Can be more than one if `n` is
            greater than 1.
          items:
            type: object
            required:
              - finish_reason
              - index
              - message
            properties:
              finish_reason:
                type: string
                description: The reason the model stopped generating tokens.
                enum:
                  - stop
                  - length
                  - content_filter
              index:
                type: integer
                description: The index of the choice in the list of choices.
              message:
                $ref: "#/components/schemas/ChatCompletionResponseMessage"
        created:
          type: integer
          description: >-
            The Unix timestamp (in seconds) of when the chat completion was
            created.
        model:
          type: string
          description: The model used for the chat completion.
        usage:
          $ref: "#/components/schemas/CompletionUsage"
      required:
        - choices
        - created
        - id
        - model
    CreateChatCompletionStreamResponse:
      type: object
      description: >-
        Represents a streamed chunk of a chat completion response returned by
        the model, based on the provided input.
      properties:
        id:
          type: string
          description: >-
            A unique identifier for the chat completion. Each chunk has the
            same ID.
        choices:
          type: array
          description: >-
            A list of chat completion choices. Can be more than one if `n` is
            greater than 1.
          items:
            type: object
            required:
              - delta
              - finish_reason
              - index
            properties:
              delta:
                $ref: "#/components/schemas/ChatCompletionStreamResponseDelta"
              finish_reason:
                type: string
                description: The reason the model stopped generating tokens.
                enum:
                  - stop
                  - length
                nullable: true
              index:
                type: integer
                description: The index of the choice in the list of choices.
        created:
          type: integer
          description: >-
            The Unix timestamp (in seconds) of when the chat completion was
            created. Each chunk has the same timestamp.
        model:
          type: string
          description: The model used to generate the completion.
      required:
        - choices
        - created
        - id
        - model
    CreateEmbeddingRequest:
      type: object
      additionalProperties: false
      properties:
        input:
          description: Input text to embed, encoded as a string.
          example: The quick brown fox jumped over the lazy dog
          oneOf:
            - type: string
              default: ""
              example: This is a test.
        model:
          description: ID of the model to use.
          example: conjecture-1
          type: string
        embedding_type:
          description: Type of the embedding, either for retrieval or indexing.
          example: retrieval
          type: string
          enum:
            - retrieval
            - indexing
      required:
        - model
        - input
        - embedding_type
    CreateEmbeddingResponse:
      type: object
      properties:
        data:
          type: array
          description: The list of embeddings generated by the model.
          items:
            $ref: "#/components/schemas/Embedding"
        model:
          type: string
          description: The name of the model used to generate the embedding.
        usage:
          type: object
          description: The usage information for the request.
          properties:
            prompt_tokens:
              type: integer
              description: The number of tokens used by the prompt.
            total_tokens:
              type: integer
              description: The total number of tokens used by the request.
          required:
            - prompt_tokens
            - total_tokens
      required:
        - model
        - data
        - usage
    CreateTranscriptionRequest:
      type: object
      additionalProperties: false
      properties:
        file:
          description: >-
            The audio file object (not file name) to transcribe, in one of
            these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
          type: string
          format: binary
        language:
          description: >-
            The language of the input audio. Supplying the input language in
            [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
            format will improve accuracy and latency.
          type: string
        diarisation:
          description: Whether or not to split up and identify speakers.
          type: boolean
      required:
        - file
    CreateTranscriptionResponse:
      type: object
      properties:
        text:
          type: string
        segments:
          type: array
          items:
            $ref: "#/components/schemas/Segment"
        language:
          type: string
      required:
        - text
        - segments
        - language
    CreateSpeechRequest:
      type: object
      properties:
        text:
          type: string
          description: Text to convert into speech.
        voice_id:
          type: string
          description: Identifier for the voice to use.
      required:
        - text
        - voice_id
    CreateSpeechResponse:
      type: string
      format: binary
    VoicesResponse:
      type: array
      items:
        type: object
        properties:
          voice_id:
            type: string
          description:
            type: string
    Model:
      title: Model
      type: object
      description: Describes a Conjecture model offering that can be used with the API.
      properties:
        id:
          type: string
          description: "The model identifier, which can be referenced in the API endpoints."
        created:
          type: integer
          description: The Unix timestamp (in seconds) when the model was created.
      required:
        - id
        - created
    Embedding:
      type: object
      description: Represents an embedding vector returned by the embedding endpoint.
      properties:
        index:
          type: integer
          description: The index of the embedding in the list of embeddings.
        embedding:
          type: array
          description: "The embedding vector, which is a list of floats."
          items:
            type: number
      required:
        - index
        - embedding
    CompletionUsage:
      type: object
      description: Usage statistics for the completion request.
      properties:
        completion_tokens:
          type: integer
          description: Number of tokens in the generated completion.
        prompt_tokens:
          type: integer
          description: Number of tokens in the prompt.
        total_tokens:
          type: integer
          description: Total number of tokens used in the request (prompt + completion).
      required:
        - prompt_tokens
        - completion_tokens
        - total_tokens
    Segment:
      type: object
      properties:
        text:
          type: string
        start:
          type: number
          format: float
        end:
          type: number
          format: float
        words:
          type: array
          items:
            $ref: "#/components/schemas/Word"
        speaker:
          type: integer
          nullable: true
    Word:
      type: object
      properties:
        word:
          type: string
        start:
          type: number
          format: float
        end:
          type: number
          format: float
security:
  - ApiKeyAuth: []