GET /chat/completions/{completion_id}

Get chat completion
curl --request GET \
  --url https://api.openai.com/v1/chat/completions/{completion_id} \
  --header 'Authorization: Bearer <token>'
{
  "id": "<string>",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 123,
      "message": {
        "content": "<string>",
        "refusal": "<string>",
        "tool_calls": [
          {
            "id": "<string>",
            "type": "function",
            "function": {
              "name": "<string>",
              "arguments": "<string>"
            }
          }
        ],
        "annotations": [
          {
            "type": "url_citation",
            "url_citation": {
              "end_index": 123,
              "start_index": 123,
              "url": "<string>",
              "title": "<string>"
            }
          }
        ],
        "role": "assistant",
        "function_call": {
          "arguments": "<string>",
          "name": "<string>"
        },
        "audio": {
          "id": "<string>",
          "expires_at": 123,
          "data": "<string>",
          "transcript": "<string>"
        }
      },
      "logprobs": {
        "content": [
          {
            "token": "<string>",
            "logprob": 123,
            "bytes": [
              123
            ],
            "top_logprobs": [
              {
                "token": "<string>",
                "logprob": 123,
                "bytes": [
                  "<any>"
                ]
              }
            ]
          }
        ],
        "refusal": [
          {
            "token": "<string>",
            "logprob": 123,
            "bytes": [
              123
            ],
            "top_logprobs": [
              {
                "token": "<string>",
                "logprob": 123,
                "bytes": [
                  "<any>"
                ]
              }
            ]
          }
        ]
      }
    }
  ],
  "created": 123,
  "model": "<string>",
  "service_tier": "auto",
  "system_fingerprint": "<string>",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 0,
    "prompt_tokens": 0,
    "total_tokens": 0,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0
    }
  }
}

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

completion_id
string
required

The ID of the chat completion to retrieve.
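
The curl call above can also be sketched with Python's standard library. This is a minimal sketch, not an official client: the completion ID is a hypothetical placeholder, and the API key is assumed to live in the `OPENAI_API_KEY` environment variable.

```python
import os
import urllib.request

API_BASE = "https://api.openai.com/v1"

def build_request(completion_id: str, token: str) -> urllib.request.Request:
    # Build the GET request for a stored chat completion.
    url = f"{API_BASE}/chat/completions/{completion_id}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

# "chatcmpl-abc123" is a hypothetical ID; a real one comes from a prior create call.
req = build_request("chatcmpl-abc123", os.environ.get("OPENAI_API_KEY", "<token>"))
print(req.full_url)

# Actually sending it requires a valid key and an existing stored completion:
# with urllib.request.urlopen(req) as resp:
#     completion = resp.read()
```

The request-building step is separated out so the URL and headers can be inspected without making a network call.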

Response

200 - application/json

A chat completion

Represents a chat completion response returned by the model, based on the provided input.

id
string
required

A unique identifier for the chat completion.

choices
object[]
required

A list of chat completion choices. Can be more than one if n is greater than 1.
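
Because choices is a list, a request made with n greater than 1 yields one entry per generated answer. A minimal sketch of iterating over them, using a trimmed-down sample response (all values are placeholders, not live data):

```python
# Trimmed-down sample response; only the fields used below are included.
response = {
    "choices": [
        {"index": 0, "finish_reason": "stop",
         "message": {"role": "assistant", "content": "First answer"}},
        {"index": 1, "finish_reason": "length",
         "message": {"role": "assistant", "content": "Second answer"}},
    ],
}

for choice in response["choices"]:
    msg = choice["message"]
    print(choice["index"], choice["finish_reason"], msg["content"])
```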

created
integer
required

The Unix timestamp (in seconds) of when the chat completion was created.
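
Since created is seconds since the Unix epoch, it converts directly with the standard library (the timestamp below is a hypothetical example value):

```python
from datetime import datetime, timezone

created = 1677652288  # hypothetical "created" value from a response
when = datetime.fromtimestamp(created, tz=timezone.utc)
print(when.isoformat())  # → 2023-03-01T06:31:28+00:00
```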

model
string
required

The model used for the chat completion.

object
enum<string>
required

The object type, which is always chat.completion.

Available options:
chat.completion
service_tier
enum<string> | null
default:auto

Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:

  • If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
  • If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
  • If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
  • If set to 'economy', the request will be processed with the Economy service tier at the specified price. Economy requests have longer response times and may experience Resource Unavailable errors. See the docs for more information.
  • When not set, the default behavior is 'auto'.

When this parameter is set, the response body will include the service_tier utilized.

Available options:
auto,
default,
economy
system_fingerprint
string

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
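
One way to use this field, sketched with hypothetical response dicts: if two calls share the same seed but report different fingerprints, a changed output reflects a backend change rather than nondeterminism in the model.

```python
def same_backend(resp_a: dict, resp_b: dict) -> bool:
    # True when both responses report the same backend configuration.
    fp_a = resp_a.get("system_fingerprint")
    fp_b = resp_b.get("system_fingerprint")
    return fp_a is not None and fp_a == fp_b

# Hypothetical fingerprint values, for illustration only.
a = {"system_fingerprint": "fp_44709d6fcb"}
b = {"system_fingerprint": "fp_44709d6fcb"}
c = {"system_fingerprint": "fp_1bb46167f9"}
print(same_backend(a, b), same_backend(a, c))  # → True False
```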

usage
object

Usage statistics for the completion request.
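
The usage object nests detail counts under the top-level totals. As a sanity-check sketch over a sample usage object (token counts are made-up placeholders), total_tokens should equal prompt_tokens plus completion_tokens:

```python
# Sample usage object mirroring the response schema; values are placeholders.
usage = {
    "completion_tokens": 17,
    "prompt_tokens": 57,
    "total_tokens": 74,
    "completion_tokens_details": {
        "accepted_prediction_tokens": 0,
        "audio_tokens": 0,
        "reasoning_tokens": 0,
        "rejected_prediction_tokens": 0,
    },
    "prompt_tokens_details": {"audio_tokens": 0, "cached_tokens": 0},
}

# Totals are the sum of prompt and completion tokens.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print("billed tokens:", usage["total_tokens"])
```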