Columns: `instruction` (stringclasses 1), `input` (stringlengths 2–18), `output` (stringlengths 103–123).

Every row uses the same instruction (the `instruction` column has a single unique value), so it is shown once here:

Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
    "has_profanity": <boolean>,
    "profanity_language": "<string (language code)>",
    "profanity_words": [<list of strings, if any>],
    "confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON.

| input | output |
|---|---|
| sekreter | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| sikiiimmm | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sikiiimmm"], "confidence_level": "99.5%"} |
| pohms | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["pohms"], "confidence_level": "99.5%"} |
| arschficker | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["arschficker"], "confidence_level": "99.5%"} |
| quim | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["quim"], "confidence_level": "99.5%"} |
| harita | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| pinkeln | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["pinkeln"], "confidence_level": "99.5%"} |
| amk çocuğu | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["amk çocuğu"], "confidence_level": "99.5%"} |
| alaskan pipeline | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["alaskan pipeline"], "confidence_level": "99.5%"} |
| kalıcı | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| onanieren | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["onanieren"], "confidence_level": "99.5%"} |
| Auto | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| bumsen | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["bumsen"], "confidence_level": "99.5%"} |
| Herbst | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| schwanzlutscher | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["schwanzlutscher"], "confidence_level": "99.5%"} |
| yourpenis | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["yourpenis"], "confidence_level": "99.5%"} |
| f_u_c_k | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["f_u_c_k"], "confidence_level": "99.5%"} |
| Student | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| mufti | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["mufti"], "confidence_level": "99.5%"} |
| beastility | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["beastility"], "confidence_level": "99.5%"} |
| yazı | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Desktop | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| russkie | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["russkie"], "confidence_level": "99.5%"} |
| nippel | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["nippel"], "confidence_level": "99.5%"} |
| Weather | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| personel | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| karhane | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["karhane"], "confidence_level": "99.5%"} |
| Report | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| sikiyorum | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sikiyorum"], "confidence_level": "99.5%"} |
| ische | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["ische"], "confidence_level": "99.5%"} |
| Cost | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| fick | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["fick"], "confidence_level": "99.5%"} |
| asker | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| koyarm | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["koyarm"], "confidence_level": "99.5%"} |
| wichsen | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["wichsen"], "confidence_level": "99.5%"} |
| coital | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["coital"], "confidence_level": "99.5%"} |
| merak | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| nimphomania | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["nimphomania"], "confidence_level": "99.5%"} |
| avukat | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| style doggy | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["style doggy"], "confidence_level": "99.5%"} |
| ghey | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["ghey"], "confidence_level": "99.5%"} |
| Keyboard | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| chodes | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["chodes"], "confidence_level": "99.5%"} |
| hure | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["hure"], "confidence_level": "99.5%"} |
| Producer | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| yarpie | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["yarpie"], "confidence_level": "99.5%"} |
| Tuesday | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| mal | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["mal"], "confidence_level": "99.5%"} |
| penis | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["penis"], "confidence_level": "99.5%"} |
| Hello | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| yaaraaa | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["yaaraaa"], "confidence_level": "99.5%"} |
| Please | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| zeytin | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| bızır | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bızır"], "confidence_level": "99.5%"} |
| Report | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| mabunos | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["mabunos"], "confidence_level": "99.5%"} |
| boku | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["boku"], "confidence_level": "99.5%"} |
| New York | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Abbildung | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| t i ts | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["t i ts"], "confidence_level": "99.5%"} |
| nippel | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["nippel"], "confidence_level": "99.5%"} |
| Music | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| semt | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| döner | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| amuğa | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["amuğa"], "confidence_level": "99.5%"} |
| götlalesi | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["götlalesi"], "confidence_level": "99.5%"} |
| soused | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["soused"], "confidence_level": "99.5%"} |
| hardcore | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["hardcore"], "confidence_level": "99.5%"} |
| başkan | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Theater | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Band | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Berlin | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Artist | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| üniversite | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| numune | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| koduumun | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["koduumun"], "confidence_level": "99.5%"} |
| schiesser | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["schiesser"], "confidence_level": "99.5%"} |
| salatalık | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Temperatur | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Alexanderplatz 7 | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| muz | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Bildschirm | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| kimme | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kimme"], "confidence_level": "99.5%"} |
| Kreditkarte | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| skem | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["skem"], "confidence_level": "99.5%"} |
| fingerfood | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["fingerfood"], "confidence_level": "99.5%"} |
| Rain | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Weather | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| bong | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["bong"], "confidence_level": "99.5%"} |
| ocak | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| sünnet | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| makbuz | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Phone | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Arzt | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| one guy | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["one guy"], "confidence_level": "99.5%"} |
| genital | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["genital"], "confidence_level": "99.5%"} |
| Bus | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| ipne | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["ipne"], "confidence_level": "99.5%"} |
| fuckersucker | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["fuckersucker"], "confidence_level": "99.5%"} |
| fasulye | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
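Every `output` value above follows the schema defined in the shared instruction. Below is a minimal sketch of how one might check a model response against that schema using only the Python standard library; the function name `validate_output` and the flagged-words consistency check are illustrative additions, not part of the dataset.

```python
import json

def validate_output(raw: str) -> dict:
    """Parse a model response and check it against the expected schema.

    Illustrative helper, not part of the dataset. Raises json.JSONDecodeError
    on invalid JSON and ValueError on schema violations.
    """
    data = json.loads(raw)

    if not isinstance(data.get("has_profanity"), bool):
        raise ValueError("has_profanity must be a boolean")
    if not isinstance(data.get("profanity_language"), str):
        raise ValueError("profanity_language must be a language-code string")
    words = data.get("profanity_words")
    if not (isinstance(words, list) and all(isinstance(w, str) for w in words)):
        raise ValueError("profanity_words must be a list of strings")
    conf = data.get("confidence_level")
    if not (isinstance(conf, str) and conf.endswith("%")):
        raise ValueError('confidence_level must be a percentage string, e.g. "99.5%"')

    # Consistency pattern observed in the rows above (an assumption, not a
    # stated rule): flagged rows list the offending words, clean rows do not.
    if data["has_profanity"] != bool(words):
        raise ValueError("has_profanity disagrees with profanity_words")
    return data

# Example against the first row of the table:
row = '{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}'
print(validate_output(row))
```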