Columns:
  instruction: string (1 unique value)
  input: string (2 to 18 characters)
  output: string (103 to 123 characters)
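Each row below pairs the single fixed instruction and a short input string with a JSON analysis object in the output column. As a minimal sketch of how an output value could be checked against the schema the instruction describes, the Python helper below (validate_output is a hypothetical name, not part of the dataset) parses the string and verifies the four fields the instruction names:

import json

# Hypothetical helper (not part of the dataset): checks that an output
# string parses as JSON and carries the four fields the instruction names,
# each with the expected type.
def validate_output(output_str: str) -> bool:
    try:
        obj = json.loads(output_str)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(obj.get("has_profanity"), bool)
        and isinstance(obj.get("profanity_language"), str)
        and isinstance(obj.get("profanity_words"), list)
        and all(isinstance(w, str) for w in obj["profanity_words"])
        and isinstance(obj.get("confidence_level"), str)
        and obj["confidence_level"].endswith("%")
    )

# Checked against one output value taken verbatim from the rows below:
assert validate_output('{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}')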
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
eatmyass
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["eatmyass"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
tig ol bitties
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["tig ol bitties"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Allee
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
sktiimin
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sktiimin"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
döner
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
pusies
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["pusies"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Geld
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kacke
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["kacke"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
donkey punch
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["donkey punch"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Train
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
bo obs
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["bo obs"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
domalmak
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["domalmak"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
assbanger
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["assbanger"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
dava
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
nackt
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["nackt"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
ders
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Car
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
ficken
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["ficken"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kimme
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["kimme"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Cash
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
amcıklama
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["amcıklama"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
tittchen
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["tittchen"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
ev
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kıyafet
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Montag
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Shop
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
mufti
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["mufti"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
onanieren
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["onanieren"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kumbullbe
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["kumbullbe"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Film
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Gestern
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
bina
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Reden
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
ammak
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["ammak"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
porno
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["porno"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
arsch
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["arsch"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
yardım
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kampflesbe
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["kampflesbe"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
eczane
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
voyuer
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["voyuer"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
amcklaryla
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["amcklaryla"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
balık
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Telefon
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
knobed
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["knobed"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
University
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
yarraaaa
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["yarraaaa"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
salaak
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["salaak"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
gyopos
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["gyopos"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
dödel
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["dödel"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
fanculo
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["fanculo"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Walk
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
bumsen
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["bumsen"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
shithapens
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["shithapens"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
oyun
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
başarı
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
iyi
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Actor
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
durak
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
scheisser
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["scheisser"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
pimpern
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["pimpern"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
şoför
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
flittchen
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["flittchen"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
diaper heads
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["diaper heads"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
oldukça
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Road
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Keyboard
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Sale
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
morgenlatte
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["morgenlatte"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
pezevengin evladı
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["pezevengin evladı"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
arschficker
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["arschficker"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
pisser
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["pisser"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
coğrafya
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
nutte
{"has_profanity": true, "profanity_language": "de", "profanity_words": ["nutte"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
sorumluluk
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Money
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Tastatur
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
cumtart
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["cumtart"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
sağlık
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Regisseur
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Letter
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
ching chong
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["ching chong"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
khokhol
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["khokhol"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
donkeyribber
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["donkeyribber"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
oğlu it
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["oğlu it"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Markt
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
ebleh
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["ebleh"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
tity
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["tity"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
orospu çoc
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["orospu çoc"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
muayene
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
sikleri
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sikleri"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kasım
{"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Computer
{"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
sikien
{"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sikien"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
Video
{"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
kuffar
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["kuffar"], "confidence_level": "99.5%"}
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
rentafuck
{"has_profanity": true, "profanity_language": "en", "profanity_words": ["rentafuck"], "confidence_level": "99.5%"}