Parameter Details - OpenAI ChatGPT Documentation


You can interact with the API from any language via HTTP requests, using our official Python bindings, official Node.js library, or community-maintained libraries.

To install the official Python bindings, run the following command:

pip install openai

To install the official Node.js library, please run the following command in your Node.js project directory:

npm install openai

Authentication

The OpenAI API uses API keys for authentication. Visit your API keys page to retrieve the API key you will use in your requests.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server, where your API key can be securely loaded from an environment variable or a key management service.

All API requests should include your API key in the Authorization HTTP header, in the following format:

Authorization: Bearer OPENAI_API_KEY
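As a minimal sketch, the header above can be assembled in Python from an environment variable, as recommended earlier; the helper name `auth_headers` and the `"sk-example"` fallback are illustrative, not part of the API:

```python
import os

def auth_headers():
    # Load the key from the environment rather than hard-coding it.
    # "sk-example" is only a placeholder fallback for demonstration.
    api_key = os.environ.get("OPENAI_API_KEY", "sk-example")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```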

Request organization

For users who belong to multiple organizations, you can pass a header to specify which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota.

You can find the organization ID on your organization settings page.
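As an illustrative sketch, the organization header can be sent alongside the authorization header; the `org-EXAMPLE` ID below is a placeholder, not a real organization:

```python
import os

def org_headers(org_id="org-EXAMPLE"):
    # The OpenAI-Organization header selects which organization's
    # quota the request counts against. org_id is a placeholder here.
    return {
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', 'sk-example')}",
        "OpenAI-Organization": org_id,
    }
```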

Making requests

You can paste the command below into your terminal to run your first API request. Make sure to replace $OPENAI_API_KEY with your secret API key.
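The curl command itself is not reproduced in this copy of the page; as a rough equivalent, the same chat completion request can be built with Python's standard library (actually sending it requires a valid OPENAI_API_KEY):

```python
import json
import os
import urllib.request

def build_chat_request():
    # Request body for the chat completions endpoint described above.
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say this is a test!"}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send the request:
# with urllib.request.urlopen(build_chat_request()) as resp:
#     print(json.load(resp))
```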

This request queries the gpt-3.5-turbo model to complete the text starting from the prompt 'Say this is a test'. You should receive a response similar to the following:

You have now generated your first chat completion. Notice that finish_reason is stop, which means the API returned the full completion generated by the model. In the request above we generated only one message, but you can set the n parameter to generate multiple message choices. In this example, gpt-3.5-turbo was used for a more traditional text-completion task, but the model is also optimized for chat applications.
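A trimmed-down, hypothetical response of this shape shows where finish_reason appears (the field values are illustrative; real responses also include id, created, model, and usage fields):

```python
# Hypothetical chat completion response fragment.
response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "This is a test!"},
            "finish_reason": "stop",
        }
    ]
}

# "stop" means the API returned a complete, un-truncated message.
assert response["choices"][0]["finish_reason"] == "stop"
```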

Models

List and describe the various models available in the API. You can refer to the model documentation to understand which models are available and the differences between them.

List models

Lists the currently available models, and provides basic information about each one, such as the owner and availability.

Retrieve model

GET https://api.openai.com/v1/models/{model}

Retrieves a model instance, providing basic information about the model such as the owner and permissions.

Path Parameters

model string Required

The ID of the model to use for this request.
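As a sketch, the retrieve-model call can be built the same way with the standard library; "gpt-3.5-turbo" below stands in for any model ID:

```python
import os
import urllib.request

def build_model_request(model_id="gpt-3.5-turbo"):
    # GET /v1/models/{model}; model_id fills the {model} path parameter.
    return urllib.request.Request(
        f"https://api.openai.com/v1/models/{model_id}",
        headers={"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"},
    )
```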

Completions

Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.

Create completion

Creates a completion for the provided prompt and parameters.

Request Body

model string Required

ID of the model to use. You can use the List models API to see all available models, or refer to our Model overview for descriptions of them.

prompt string or array Optional Default is <|endoftext|>

The prompt(s) to generate completions for, encoded as a string, an array of strings, an array of tokens, or an array of arrays of tokens.

suffix string Optional Default is null

The suffix that comes after a completion of inserted text.

max_tokens integer Optional Default is 16

The maximum number of tokens to generate in the completion.

The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).

temperature number Optional Default is 1

What sampling temperature to use, between 0 and 2. Higher values, such as 0.8, will make the output more random, while lower values, such as 0.2, will make it more focused and deterministic.

We generally recommend altering this or the top_p parameter, but not both at the same time.

top_p number Optional Default is 1

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature, but not both at the same time.
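Pulling the parameters above together, an illustrative request body for the completions endpoint might look like the following; the model name and values are arbitrary examples under the assumption of a completions-capable model, not recommendations:

```python
# Illustrative body for a completions request; only "model" is required.
completion_body = {
    "model": "gpt-3.5-turbo-instruct",  # assumed example model
    "prompt": "Say this is a test",
    "max_tokens": 16,       # the default shown above
    "temperature": 0.7,     # tune this OR top_p, not both
    "top_p": 1,
}

# Remember: the prompt's token count plus max_tokens must stay within
# the model's context length (e.g. 2048 tokens for most models).
```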

n integer Optional Default is 1