Creates a fine-tuning job which begins the process of creating a new model from a given dataset.

The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once complete.

Example request:

curl --request POST \
  --url https://api.openai.com/v1/fine_tuning/jobs \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o-mini",
    "training_file": "file-abc123",
    "hyperparameters": {
      "batch_size": "auto",
      "learning_rate_multiplier": "auto",
      "n_epochs": "auto"
    },
    "suffix": null,
    "validation_file": "file-abc123",
    "integrations": [
      {
        "type": "wandb",
        "wandb": {
          "project": "my-wandb-project",
          "name": "<string>",
          "entity": "<string>",
          "tags": ["custom-tag"]
        }
      }
    ],
    "seed": 42,
    "method": {
      "type": "supervised",
      "supervised": {
        "hyperparameters": {
          "batch_size": "auto",
          "learning_rate_multiplier": "auto",
          "n_epochs": "auto"
        }
      },
      "dpo": {
        "hyperparameters": {
          "beta": "auto",
          "batch_size": "auto",
          "learning_rate_multiplier": "auto",
          "n_epochs": "auto"
        }
      }
    },
    "metadata": {}
  }'

Example response:

{
  "id": "<string>",
  "created_at": 123,
  "error": {
    "code": "<string>",
    "message": "<string>",
    "param": "<string>"
  },
  "fine_tuned_model": "<string>",
  "finished_at": 123,
  "hyperparameters": {
    "batch_size": "auto",
    "learning_rate_multiplier": "auto",
    "n_epochs": "auto"
  },
  "model": "<string>",
  "object": "fine_tuning.job",
  "organization_id": "<string>",
  "result_files": [
    "file-abc123"
  ],
  "status": "validating_files",
  "trained_tokens": 123,
  "training_file": "<string>",
  "validation_file": "<string>",
  "seed": 123,
  "integrations": [
    {
      "type": "wandb",
      "wandb": {
        "project": "my-wandb-project",
        "name": "<string>",
        "entity": "<string>",
        "tags": ["custom-tag"]
      }
    }
  ],
  "estimated_finish": 123,
  "method": {
    "type": "supervised",
    "supervised": {
      "hyperparameters": {
        "batch_size": "auto",
        "learning_rate_multiplier": "auto",
        "n_epochs": "auto"
      }
    },
    "dpo": {
      "hyperparameters": {
        "beta": "auto",
        "batch_size": "auto",
        "learning_rate_multiplier": "auto",
        "n_epochs": "auto"
      }
    }
  },
  "metadata": {}
}
Authorization
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
model
The name of the model to fine-tune. You can select one of the supported models.
Example: "gpt-4o-mini"
training_file
The ID of an uploaded file that contains training data.
See upload file for how to upload a file.
Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.
The contents of the file should differ depending on whether the model uses the chat or completions format, or whether the fine-tuning method uses the preference format.
See the fine-tuning guide for more details.
Example: "file-abc123"
hyperparameters (deprecated)
The hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.
Child attributes:
  batch_size
  Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. Default: auto.
  learning_rate_multiplier
  Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. Default: auto.
  n_epochs
  The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. Default: auto.
suffix
A string of up to 64 characters that will be added to your fine-tuned model name.
For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.
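The example above follows the pattern ft:{base model}:{organization}:{suffix}:{short job identifier}. A quick sketch reconstructing that example name (the organization and identifier values are taken from the example, not computed by any API):

```python
# Reconstructing the documented example model name from its parts.
base_model = "gpt-4o-mini"
organization = "openai"
suffix = "custom-model-name"
job_fragment = "7p4lURel"  # short identifier appended by the service

model_name = f"ft:{base_model}:{organization}:{suffix}:{job_fragment}"
print(model_name)  # ft:gpt-4o-mini:openai:custom-model-name:7p4lURel
```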
validation_file
The ID of an uploaded file that contains validation data.
If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both the training and validation files.
Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.
See the fine-tuning guide for more details.
Example: "file-abc123"
integrations
A list of integrations to enable for your fine-tuning job.
Child attributes:
  type
  The type of integration to enable. Currently, only "wandb" (Weights and Biases) is supported.
  wandb
  The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run.
    project
    The name of the project that the new run will be created under. Example: "my-wandb-project"
    name
    A display name to set for the run. If not set, we will use the Job ID as the name.
    entity
    The entity to use for the run. This allows you to set the team or username of the WandB user that you would like associated with the run. If not set, the default entity for the registered WandB API key is used.
    tags
    A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}".
seed
The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but results may differ in rare cases. If a seed is not specified, one will be generated for you.
Required range: 0 <= x <= 2147483647
Example: 42
method
The method used for fine-tuning.
Child attributes:
  type
  The type of method. Is either supervised or dpo.
  supervised
  Configuration for the supervised fine-tuning method.
    hyperparameters
    The hyperparameters used for the fine-tuning job.
      batch_size
      Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. Default: auto.
      learning_rate_multiplier
      Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. Default: auto.
      n_epochs
      The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. Default: auto.
  dpo
  Configuration for the DPO fine-tuning method.
    hyperparameters
    The hyperparameters used for the fine-tuning job.
      beta
      The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model. Default: auto.
      batch_size
      Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. Default: auto.
      learning_rate_multiplier
      Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. Default: auto.
      n_epochs
      The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. Default: auto.
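For instance, a request body selecting the DPO method could be assembled as below. This is a minimal sketch: the training file ID is the placeholder used in the examples above, and the "auto" hyperparameter values are the documented defaults.

```python
import json

# Sketch of a request body for a DPO fine-tuning job.
payload = {
    "model": "gpt-4o-mini",
    "training_file": "file-abc123",  # placeholder file ID from the examples
    "method": {
        "type": "dpo",
        "dpo": {
            "hyperparameters": {
                "beta": "auto",  # weight of the penalty between policy and reference model
                "batch_size": "auto",
                "learning_rate_multiplier": "auto",
                "n_epochs": "auto",
            }
        },
    },
}

body = json.dumps(payload, indent=2)
print(body)
```

The resulting string would be sent as the --data argument of the curl request shown earlier.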
metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and for querying for objects via the API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
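The metadata limits above can be checked client-side before sending a request. A hypothetical helper (the function is my own, not part of the API):

```python
def validate_metadata(metadata: dict) -> None:
    """Check the documented limits: at most 16 pairs, keys <= 64 chars, values <= 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata may contain at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"metadata key must be a string of at most 64 chars: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"metadata value must be a string of at most 512 chars for key {key!r}")

validate_metadata({"team": "research", "ticket": "FT-42"})  # passes silently
```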
Response: 200 OK
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
id
The object identifier, which can be referenced in the API endpoints.
created_at
The Unix timestamp (in seconds) for when the fine-tuning job was created.
error
For fine-tuning jobs that have failed, this will contain more information on the cause of the failure. Child attributes: code, message, param.
fine_tuned_model
The name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.
finished_at
The Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.
hyperparameters
The hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.
  batch_size
  Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. Default: auto.
  learning_rate_multiplier
  Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. Default: auto.
  n_epochs
  The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. Default: auto.
model
The base model that is being fine-tuned.
object
The object type, which is always "fine_tuning.job".
organization_id
The organization that owns the fine-tuning job.
status
The current status of the fine-tuning job, which can be one of validating_files, queued, running, succeeded, failed, or cancelled.
trained_tokens
The total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.
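A client polling a job can treat succeeded, failed, and cancelled as terminal states. A small sketch of that state check (the helper name is my own, not part of the API):

```python
# The documented job statuses, split into terminal and in-progress sets.
TERMINAL_STATUSES = {"succeeded", "failed", "cancelled"}
ALL_STATUSES = {"validating_files", "queued", "running"} | TERMINAL_STATUSES

def is_terminal(status: str) -> bool:
    """Return True once the job will no longer change state."""
    if status not in ALL_STATUSES:
        raise ValueError(f"unknown fine-tuning job status: {status!r}")
    return status in TERMINAL_STATUSES
```

A polling loop would sleep between retrievals of the job until is_terminal(job["status"]) returns True, then inspect fine_tuned_model or error.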
seed
The seed used for the fine-tuning job.
integrations
A list of integrations to enable for this fine-tuning job.
Child attributes:
  type
  The type of the integration being enabled for the fine-tuning job.
  wandb
  The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run.
    project
    The name of the project that the new run will be created under. Example: "my-wandb-project"
    name
    A display name to set for the run. If not set, we will use the Job ID as the name.
    entity
    The entity to use for the run. This allows you to set the team or username of the WandB user that you would like associated with the run. If not set, the default entity for the registered WandB API key is used.
    tags
    A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}".
estimated_finish
The Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running.
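The timestamp fields (created_at, finished_at, estimated_finish) are all Unix seconds, so converting one for display is a one-liner (the value below is illustrative, not from a real job):

```python
from datetime import datetime, timezone

estimated_finish = 1_700_000_000  # illustrative Unix timestamp in seconds
when = datetime.fromtimestamp(estimated_finish, tz=timezone.utc)
print(when.isoformat())  # 2023-11-14T22:13:20+00:00
```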
method
The method used for fine-tuning.
Child attributes:
  type
  The type of method. Is either supervised or dpo.
  supervised
  Configuration for the supervised fine-tuning method.
    hyperparameters
    The hyperparameters used for the fine-tuning job.
      batch_size
      Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. Default: auto.
      learning_rate_multiplier
      Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. Default: auto.
      n_epochs
      The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. Default: auto.
  dpo
  Configuration for the DPO fine-tuning method.
    hyperparameters
    The hyperparameters used for the fine-tuning job.
      beta
      The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model. Default: auto.
      batch_size
      Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. Default: auto.
      learning_rate_multiplier
      Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. Default: auto.
      n_epochs
      The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. Default: auto.
metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and for querying for objects via the API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.