Enum for the available models in the OpenAI API.

The OpenAI API may offer models that are not listed in the OpenAiModel enum. For an up-to-date list of available models, consult the OpenAI API documentation or make a request to the API's /models endpoint.
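A minimal sketch of such a request, assuming an OPENAI_API_KEY environment variable and a runtime that provides the Fetch API (the helper name listModels is only illustrative):

  // Sketch: list the model ids currently exposed by the OpenAI API.
  async function listModels(): Promise<string[]> {
    const response = await fetch("https://api.openai.com/v1/models", {
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    });
    if (!response.ok) {
      throw new Error(`Request failed: ${response.status} ${response.statusText}`);
    }
    const body = (await response.json()) as { data: { id: string }[] };
    return body.data.map((model) => model.id);
  }

  listModels().then((ids) => console.log(ids.join("\n")));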

Enumeration Members

ADA: "text-ada-001"

Capable of very simple tasks, usually the fastest model in the GPT-3 series, and the lowest cost.

String representation recognized by the OpenAI REST API: text-ada-001

Recommended default values:

  • max_tokens: 2,048
  • temperature: 0.5
  • top_p: 1
  • frequency_penalty: 0
  • presence_penalty: 0

Link for extra information: https://beta.openai.com/docs/models/gpt-3

Remarks:

  • The max_tokens parameter specifies the maximum number of tokens that the model is allowed to generate in its response.

Alias names: ada (older version)
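To illustrate these defaults, here is a minimal sketch of a completion request sent directly to the REST API, assuming an OPENAI_API_KEY environment variable and the Fetch API; "text-ada-001" is the value of OpenAiModel.ADA documented above, and a smaller max_tokens is used so the prompt still fits inside the model's context window:

  // Sketch: a completion request with ADA and the recommended sampling defaults.
  async function completeWithAda(prompt: string): Promise<string> {
    const response = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "text-ada-001",
        prompt,
        max_tokens: 256, // 2,048 is the documented ceiling; leave room for the prompt
        temperature: 0.5,
        top_p: 1,
        frequency_penalty: 0,
        presence_penalty: 0,
      }),
    });
    const body = (await response.json()) as { choices: { text: string }[] };
    return body.choices[0].text;
  }

The same pattern applies to the other completion models below; only the model string and the max_tokens ceiling change.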

BABBAGE: "text-babbage-001"

Capable of straightforward tasks, very fast, and lower cost.

String representation recognized by the OpenAI REST API: text-babbage-001

Recommended default values:

  • max_tokens: 2,048
  • temperature: 0.5
  • top_p: 1
  • frequency_penalty: 0
  • presence_penalty: 0

Link for extra information: https://beta.openai.com/docs/models/gpt-3

Remarks:

  • The max_tokens parameter specifies the maximum number of tokens that the model is allowed to generate in its response.

Alias names: babbage (older version)

CODEX: "code-davinci-002"

A model that can understand and generate code, including translating natural language to code (limited beta).

String representation recognized by the OpenAI REST API: code-davinci-002

Link for extra information: https://beta.openai.com/docs/models/codex

Recommended default values:

  • max_tokens: 8,000
  • temperature: 0.5
  • top_p: 1
  • frequency_penalty: 0
  • presence_penalty: 0

Remarks:

  • This model is currently in limited beta.

CONTENT_FILTER: "content-filter-alpha"

A fine-tuned model that can detect whether text may be sensitive or unsafe.

String representation recognized by the OpenAI REST API: content-filter-alpha

Recommended default values:

  • max_tokens: 1
  • temperature: 0.0
  • top_p: 0
  • frequency_penalty: 0
  • presence_penalty: 0
  • logprobs: 0

Wrap your prompt in the following way:

"<|endoftext|>[prompt]\n--\nLabel:"

Link for extra information: https://beta.openai.com/docs/models/content-filter
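A minimal sketch that applies the wrapper format and the recommended defaults, assuming an OPENAI_API_KEY environment variable and the Fetch API (the helper name classify is only illustrative):

  // Sketch: classify a piece of text with the content filter.
  async function classify(text: string): Promise<string> {
    const response = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "content-filter-alpha",
        prompt: `<|endoftext|>${text}\n--\nLabel:`, // the required wrapper format
        max_tokens: 1,
        temperature: 0.0,
        top_p: 0,
        frequency_penalty: 0,
        presence_penalty: 0,
        logprobs: 0,
      }),
    });
    const body = (await response.json()) as { choices: { text: string }[] };
    // The completion is a single label; in OpenAI's content-filter guide,
    // "0" means safe, "1" sensitive, and "2" potentially unsafe.
    return body.choices[0].text.trim();
  }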

CURIE: "text-curie-001"

Very capable GPT-3 model, but faster and lower cost than Davinci.

String representation recognized by the OpenAI REST API: text-curie-001

Recommended default values:

  • max_tokens: 2,048
  • temperature: 0.5
  • top_p: 1
  • frequency_penalty: 0
  • presence_penalty: 0

Link for extra information: https://beta.openai.com/docs/models/gpt-3

Remarks:

  • This model can perform many of the same tasks as Davinci, but faster and for 1/10th the cost.
  • The max_tokens parameter specifies the maximum number of tokens that the model is allowed to generate in its response.

Alias names: curie (older version)

DAVINCI: "text-davinci-003"

The most capable GPT-3 model. Can do any task the other models can do, often with higher quality, longer output and better instruction-following. Also supports inserting completions within text.

String representation recognized by the OpenAI REST API: text-davinci-003

Recommended default values:

  • max_tokens: 4,000
  • temperature: 0.5
  • top_p: 1
  • frequency_penalty: 0
  • presence_penalty: 0

Link for extra information: https://beta.openai.com/docs/models/gpt-3

Remarks:

  • This model is generally the most capable in the GPT-3 series.
  • The max_tokens parameter specifies the maximum number of tokens that the model is allowed to generate in its response.

Alias names: davinci (older version)
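The insertion capability mentioned above is exposed through the completions endpoint's suffix parameter. A minimal sketch, assuming an OPENAI_API_KEY environment variable and the Fetch API (the helper name insertBetween is only illustrative):

  // Sketch: generate text that fits between a prefix and a suffix with Davinci.
  async function insertBetween(prefix: string, suffix: string): Promise<string> {
    const response = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "text-davinci-003",
        prompt: prefix,
        suffix, // text that should follow the generated completion
        max_tokens: 256, // 4,000 is the documented ceiling; leave room for the prompt
        temperature: 0.5,
        top_p: 1,
      }),
    });
    const body = (await response.json()) as { choices: { text: string }[] };
    return body.choices[0].text;
  }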

DAVINCI_EDIT_CODE: "code-davinci-edit-001"

Model used with the API's edits endpoint to edit code: given an input and an instruction, it returns a revised version of the input.

String representation recognized by the OpenAI REST API: code-davinci-edit-001

DAVINCI_EDIT_TEXT: "text-davinci-edit-001"

Model used with the API's edits endpoint to edit natural-language text in the same way.

String representation recognized by the OpenAI REST API: text-davinci-edit-001
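Both edit models are used with the API's /edits endpoint rather than /completions: the request carries an input and an instruction, and the response contains the edited text. A minimal sketch, assuming an OPENAI_API_KEY environment variable and the Fetch API (the helper name editCode is only illustrative):

  // Sketch: apply a natural-language instruction to a piece of code.
  async function editCode(input: string, instruction: string): Promise<string> {
    const response = await fetch("https://api.openai.com/v1/edits", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "code-davinci-edit-001", // use "text-davinci-edit-001" for prose
        input,
        instruction,
      }),
    });
    const body = (await response.json()) as { choices: { text: string }[] };
    return body.choices[0].text;
  }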
