curl --request POST \
  --url https://api.openai.com/v1/moderations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "input": "I want to kill them.",
    "model": "omni-moderation-2024-09-26"
  }'
{
  "id": "<string>",
  "model": "<string>",
  "results": [
    {
      "flagged": true,
      "categories": {
        "hate": true,
        "hate/threatening": true,
        "harassment": true,
        "harassment/threatening": true,
        "illicit": true,
        "illicit/violent": true,
        "self-harm": true,
        "self-harm/intent": true,
        "self-harm/instructions": true,
        "sexual": true,
        "sexual/minors": true,
        "violence": true,
        "violence/graphic": true
      },
      "category_scores": {
        "hate": 0.123,
        "hate/threatening": 0.123,
        "harassment": 0.123,
        "harassment/threatening": 0.123,
        "illicit": 0.123,
        "illicit/violent": 0.123,
        "self-harm": 0.123,
        "self-harm/intent": 0.123,
        "self-harm/instructions": 0.123,
        "sexual": 0.123,
        "sexual/minors": 0.123,
        "violence": 0.123,
        "violence/graphic": 0.123
      },
      "category_applied_input_types": {
        "hate": ["text"],
        "hate/threatening": ["text"],
        "harassment": ["text"],
        "harassment/threatening": ["text"],
        "illicit": ["text"],
        "illicit/violent": ["text"],
        "self-harm": ["text"],
        "self-harm/intent": ["text"],
        "self-harm/instructions": ["text"],
        "sexual": ["text"],
        "sexual/minors": ["text"],
        "violence": ["text"],
        "violence/graphic": ["text"]
      }
    }
  ]
}
Classifies whether text and/or image inputs are potentially harmful. Learn more in the moderation guide.
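For orientation, here is a minimal Python sketch of the same request using the requests library. It mirrors the curl example above; the OPENAI_API_KEY environment variable name is an assumption about where you keep your key.

import os
import requests

# Same request as the curl example above.
# Assumes the API key is stored in the OPENAI_API_KEY environment variable.
resp = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "input": "I want to kill them.",
        "model": "omni-moderation-2024-09-26",
    },
    timeout=30,
)
resp.raise_for_status()
body = resp.json()
print(body["results"][0]["flagged"])  # expected to be True for this input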
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
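If you use the official openai Python package instead of raw HTTP, the client constructs this header for you. A minimal sketch, assuming the package's v1-style interface and that OPENAI_API_KEY is set in the environment:

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment and sends the
# Authorization: Bearer <token> header on every request.
client = OpenAI()
moderation = client.moderations.create(
    model="omni-moderation-2024-09-26",
    input="I want to kill them.",
)
print(moderation.results[0].flagged)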
200 OK: Returns a moderation object, which represents whether a given input is potentially harmful.
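Continuing from the body dict in the requests sketch above, a short example of reading that object: check the top-level flagged boolean, then inspect the per-category verdicts and confidence scores (scores are floats between 0 and 1, higher meaning more confident).

result = body["results"][0]

if result["flagged"]:
    # Collect only the categories the model marked as violated,
    # sorted by score so the strongest signal comes first.
    violated = {
        name: result["category_scores"][name]
        for name, hit in result["categories"].items()
        if hit
    }
    for name, score in sorted(violated.items(), key=lambda kv: kv[1], reverse=True):
        inputs = result["category_applied_input_types"][name]
        print(f"{name}: {score:.3f} (applied to: {', '.join(inputs)})")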