Translates audio into English.

Request:

curl --request POST \
  --url https://api.openai.com/v1/audio/translations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: multipart/form-data' \
  --form file='@example-file' \
  --form model=whisper-1 \
  --form 'prompt=<string>' \
  --form response_format=json \
  --form temperature=0

Response:

{
  "text": "<string>"
}
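For reference, here is a minimal sketch of the same request in Python using the third-party requests library. The file path audio.mp3 and the OPENAI_API_KEY environment variable are placeholders for illustration, not part of the endpoint itself.

import os
import requests

# Endpoint and auth, mirroring the curl example above.
# OPENAI_API_KEY is assumed to be set in the environment.
url = "https://api.openai.com/v1/audio/translations"
headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

# "audio.mp3" is a placeholder path; any supported format works
# (flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm).
with open("audio.mp3", "rb") as audio_file:
    response = requests.post(
        url,
        headers=headers,
        files={"file": audio_file},  # requests sets the multipart Content-Type itself
        data={
            "model": "whisper-1",
            "response_format": "json",
            "temperature": 0,
        },
    )

response.raise_for_status()
print(response.json()["text"])  # the English translation

Note that the multipart Content-Type header (with its boundary) is generated automatically by requests when files= is passed, so it is not set by hand as in the curl example.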
Authorization: Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
file: The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
model: ID of the model to use. Only whisper-1 (which is powered by our open source Whisper V2 model) is currently available. Available options: "whisper-1"
response_format: The format of the output, in one of these options: json, text, srt, verbose_json, or vtt.

temperature: The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
Response: 200 OK
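Because response_format controls whether the body is JSON or plain text, a caller has to branch on the format it requested. The following is a hedged sketch, reusing the same placeholder setup as the earlier Python example; the translate helper is hypothetical, not part of any SDK.

import os
import requests

url = "https://api.openai.com/v1/audio/translations"
headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def translate(path: str, response_format: str = "json") -> str:
    """Return the English translation of an audio file.

    For "json" and "verbose_json" the text sits under the "text" key;
    for "text", "srt", and "vtt" the body is returned verbatim.
    """
    with open(path, "rb") as audio_file:
        response = requests.post(
            url,
            headers=headers,
            files={"file": audio_file},
            data={"model": "whisper-1", "response_format": response_format},
        )
    response.raise_for_status()
    if response_format in ("json", "verbose_json"):
        return response.json()["text"]
    return response.text  # plain text or SRT/VTT subtitle content

# Usage (hypothetical file):
# print(translate("audio.mp3", response_format="srt"))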