Dataset preview

Column summary: `instruction` (a single shared prompt), `input` (strings of 2–18 characters), `output` (strings of 103–123 characters).

Every row uses the same `instruction`, shown once here:

Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON.

The `input`/`output` pairs from the preview:

| input | output |
|---|---|
| hackfresse | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["hackfresse"], "confidence_level": "99.5%"} |
| Winter | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| ambiti | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["ambiti"], "confidence_level": "99.5%"} |
| mutfak | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Gebäude | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| muschi | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["muschi"], "confidence_level": "99.5%"} |
| sekis | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sekis"], "confidence_level": "99.5%"} |
| Good morning | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| orgasmus | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["orgasmus"], "confidence_level": "99.5%"} |
| sh1t | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["sh1t"], "confidence_level": "99.5%"} |
| bastard | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bastard"], "confidence_level": "99.5%"} |
| Printer | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Preis | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Gestern | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| hurensohn | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["hurensohn"], "confidence_level": "99.5%"} |
| scheisser | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["scheisser"], "confidence_level": "99.5%"} |
| gonzagas | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["gonzagas"], "confidence_level": "99.5%"} |
| düğün | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| dilekçe | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| reudig | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["reudig"], "confidence_level": "99.5%"} |
| bratze | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["bratze"], "confidence_level": "99.5%"} |
| kart | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| kacken | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kacken"], "confidence_level": "99.5%"} |
| oğlu it | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["oğlu it"], "confidence_level": "99.5%"} |
| Zug | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Büro | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Photo | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| meatrack | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["meatrack"], "confidence_level": "99.5%"} |
| picler | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["picler"], "confidence_level": "99.5%"} |
| sikimtrak | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sikimtrak"], "confidence_level": "99.5%"} |
| Brief | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Monday | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| geber | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["geber"], "confidence_level": "99.5%"} |
| mal | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["mal"], "confidence_level": "99.5%"} |
| ficken | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["ficken"], "confidence_level": "99.5%"} |
| titten | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["titten"], "confidence_level": "99.5%"} |
| toplarm | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["toplarm"], "confidence_level": "99.5%"} |
| mor | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| bilet | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| pimmel | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["pimmel"], "confidence_level": "99.5%"} |
| pimpern | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["pimpern"], "confidence_level": "99.5%"} |
| Morning | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| donkeypunch | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["donkeypunch"], "confidence_level": "99.5%"} |
| sharmute | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["sharmute"], "confidence_level": "99.5%"} |
| dödel | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["dödel"], "confidence_level": "99.5%"} |
| sikilmi | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sikilmi"], "confidence_level": "99.5%"} |
| teknoloji | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| clusterfuck | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["clusterfuck"], "confidence_level": "99.5%"} |
| bokubokuna | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bokubokuna"], "confidence_level": "99.5%"} |
| götoğlanı | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["götoğlanı"], "confidence_level": "99.5%"} |
| Winter | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| jap | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["jap"], "confidence_level": "99.5%"} |
| Büro | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Bank | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| syphilis | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["syphilis"], "confidence_level": "99.5%"} |
| sokarim | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sokarim"], "confidence_level": "99.5%"} |
| gtelek | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["gtelek"], "confidence_level": "99.5%"} |
| limy | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["limy"], "confidence_level": "99.5%"} |
| diktim | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["diktim"], "confidence_level": "99.5%"} |
| godammit | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["godammit"], "confidence_level": "99.5%"} |
| domalık | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["domalık"], "confidence_level": "99.5%"} |
| attrrm | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["attrrm"], "confidence_level": "99.5%"} |
| Theater | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| gaysex | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["gaysex"], "confidence_level": "99.5%"} |
| Danke | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| mufti | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["mufti"], "confidence_level": "99.5%"} |
| möpse | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["möpse"], "confidence_level": "99.5%"} |
| mggor | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["mggor"], "confidence_level": "99.5%"} |
| sh!t | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["sh!t"], "confidence_level": "99.5%"} |
| Arzt | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Band | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| möpse | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["möpse"], "confidence_level": "99.5%"} |
| Office | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Konzert | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| Thank you | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| sexs | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sexs"], "confidence_level": "99.5%"} |
| c-0-c-k | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["c-0-c-k"], "confidence_level": "99.5%"} |
| kackwurst | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kackwurst"], "confidence_level": "99.5%"} |
| February | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| Walk | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| scheiße | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["scheiße"], "confidence_level": "99.5%"} |
| wichse | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["wichse"], "confidence_level": "99.5%"} |
| Keyboard | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| bratze | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["bratze"], "confidence_level": "99.5%"} |
| tipinizi s.keyim | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["tipinizi s.keyim"], "confidence_level": "99.5%"} |
| pisser | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["pisser"], "confidence_level": "99.5%"} |
| sıkıntı | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| yarran | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["yarran"], "confidence_level": "99.5%"} |
| c-o-c-k | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["c-o-c-k"], "confidence_level": "99.5%"} |
| weledizina | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["weledizina"], "confidence_level": "99.5%"} |
| jungle bunny | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["jungle bunny"], "confidence_level": "99.5%"} |
| kelime | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| Computer | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| January | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| ahmak | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["ahmak"], "confidence_level": "99.5%"} |
| Arzt | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
| February | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
| telefon | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| kanal | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
| pisser | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["pisser"], "confidence_level": "99.5%"} |
Multilingual Profanity Detection Dataset
A multilingual profanity detection dataset in Alpaca format for fine-tuning LLMs. It contains examples in English, German, and Turkish, covering both benign and toxic content.
Format
The dataset follows the Alpaca instruction format with three fields (see the sample record below):
- `instruction`: the task description
- `input`: the text to analyze
- `output`: the JSON result of the profanity analysis
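For illustration, here is one record from the preview above written out in this structure. This is a sketch that assumes a JSON/JSONL serialization (typical for Alpaca-style datasets); the field values are copied from the preview.

```json
{
  "instruction": "Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:\n{\n\"has_profanity\": <boolean>,\n\"profanity_language\": \"<string (language code)>\",\n\"profanity_words\": [<list of strings, if any>],\n\"confidence_level\": \"<string (percentage, e.g., 99.5%)>\"\n}\nEnsure your response is valid JSON.",
  "input": "Winter",
  "output": "{\"has_profanity\": false, \"profanity_language\": \"de\", \"profanity_words\": [], \"confidence_level\": \"100%\"}"
}
```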
Languages
The dataset includes examples in English (en), German (de), and Turkish (tr).
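Below is a minimal sketch of how the dataset could be loaded and its `output` column parsed with the Hugging Face `datasets` library. The repository id is a hypothetical placeholder, since the card does not state one, and the `train` split name is an assumption.

```python
import json

from datasets import load_dataset

# Hypothetical repo id and split name; replace with the actual values.
dataset = load_dataset("your-username/multilingual-profanity-detection", split="train")

for example in dataset:
    # The output column stores the analysis as a JSON string.
    analysis = json.loads(example["output"])
    if analysis["has_profanity"]:
        print(example["input"], analysis["profanity_language"], analysis["profanity_words"])
```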