Columns:

- instruction: string, 1 distinct value (the shared prompt, shown once below)
- input: string, 2 to 18 characters
- output: string, 103 to 123 characters
Shared instruction (identical for every row):

Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.
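Every output must parse as a JSON object with exactly these four fields. As a minimal sketch of how a consumer of this dataset might check that a model response conforms, the Python below validates a single output string; the function name and the specific type checks are illustrative assumptions, not part of the dataset.

```python
import json

def validate_output(raw: str) -> bool:
    """Return True if `raw` matches the four-field schema above."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict):
        return False
    return (
        isinstance(obj.get("has_profanity"), bool)
        and isinstance(obj.get("profanity_language"), str)
        and isinstance(obj.get("profanity_words"), list)
        and all(isinstance(w, str) for w in obj["profanity_words"])
        and isinstance(obj.get("confidence_level"), str)
        and obj["confidence_level"].endswith("%")
    )

# Checked against the first row's output:
assert validate_output(
    '{"has_profanity": true, "profanity_language": "en", '
    '"profanity_words": ["huge fat"], "confidence_level": "99.5%"}'
)
```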
Rows (input → output):

huge fat → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["huge fat"], "confidence_level": "99.5%"}
kampflesbe → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kampflesbe"], "confidence_level": "99.5%"}
Morning → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Fahrrad → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
penis → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["penis"], "confidence_level": "99.5%"}
yarrağım → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["yarrağım"], "confidence_level": "99.5%"}
Brief → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
schwuchtel → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["schwuchtel"], "confidence_level": "99.5%"}
Sprechen → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Angebot → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Spring → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Spring → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
titten → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["titten"], "confidence_level": "99.5%"}
ovum → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["ovum"], "confidence_level": "99.5%"}
siksinbaya → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["siksinbaya"], "confidence_level": "99.5%"}
orgasmus → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["orgasmus"], "confidence_level": "99.5%"}
junglebunny → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["junglebunny"], "confidence_level": "99.5%"}
brown shower → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["brown shower"], "confidence_level": "99.5%"}
Warm → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
reudig → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["reudig"], "confidence_level": "99.5%"}
pizda → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["pizda"], "confidence_level": "99.5%"}
kackwurst → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kackwurst"], "confidence_level": "99.5%"}
Cold → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
bacndan → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bacndan"], "confidence_level": "99.5%"}
kackbratze → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kackbratze"], "confidence_level": "99.5%"}
Brief → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
cockknocker → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["cockknocker"], "confidence_level": "99.5%"}
yarran → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["yarran"], "confidence_level": "99.5%"}
muschi → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["muschi"], "confidence_level": "99.5%"}
squa → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["squa"], "confidence_level": "99.5%"}
Studio → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
masterbate → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["masterbate"], "confidence_level": "99.5%"}
Bank → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
demokrasi → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
arschlecker → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["arschlecker"], "confidence_level": "99.5%"}
Sommer → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
yöntem → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
gypos → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["gypos"], "confidence_level": "99.5%"}
shitoutofluck → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["shitoutofluck"], "confidence_level": "99.5%"}
fotze → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["fotze"], "confidence_level": "99.5%"}
Temperatur → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
aşama → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Restaurant → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
nigger → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["nigger"], "confidence_level": "99.5%"}
Tuesday → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Artist → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Today → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Camera → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
injuns → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["injuns"], "confidence_level": "99.5%"}
wichser → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["wichser"], "confidence_level": "99.5%"}
Image → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
bastard → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bastard"], "confidence_level": "99.5%"}
reudig → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["reudig"], "confidence_level": "99.5%"}
motherfuckka → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["motherfuckka"], "confidence_level": "99.5%"}
wichse → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["wichse"], "confidence_level": "99.5%"}
muschi → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["muschi"], "confidence_level": "99.5%"}
dragqueen → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["dragqueen"], "confidence_level": "99.5%"}
Doktor → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Rabatt → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
stupidfucker → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["stupidfucker"], "confidence_level": "99.5%"}
vollpfosten → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["vollpfosten"], "confidence_level": "99.5%"}
kayısı → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
arsch → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["arsch"], "confidence_level": "99.5%"}
hastane → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
strap on → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["strap on"], "confidence_level": "99.5%"}
üzüntü → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
bumsen → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["bumsen"], "confidence_level": "99.5%"}
kampüs → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Kamera → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Sale → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
Bicycle → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
plaj → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
doggie-style → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["doggie-style"], "confidence_level": "99.5%"}
pisin → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["pisin"], "confidence_level": "99.5%"}
macaca → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["macaca"], "confidence_level": "99.5%"}
wichser → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["wichser"], "confidence_level": "99.5%"}
siktigiminin → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["siktigiminin"], "confidence_level": "99.5%"}
kacke → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kacke"], "confidence_level": "99.5%"}
hava → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Music → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
ische → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["ische"], "confidence_level": "99.5%"}
Restaurant → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Laden → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
nisan → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
Preis → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
Welcome → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
nackt → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["nackt"], "confidence_level": "99.5%"}
Artikel → {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"}
yeşil → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
osur → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["osur"], "confidence_level": "99.5%"}
havuç → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
sebep → {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"}
morgenlatte → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["morgenlatte"], "confidence_level": "99.5%"}
mother fukkah → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["mother fukkah"], "confidence_level": "99.5%"}
skum → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["skum"], "confidence_level": "99.5%"}
ische → {"has_profanity": true, "profanity_language": "de", "profanity_words": ["ische"], "confidence_level": "99.5%"}
va1jina → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["va1jina"], "confidence_level": "99.5%"}
pici → {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["pici"], "confidence_level": "99.5%"}
Bus → {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"}
fuckpig → {"has_profanity": true, "profanity_language": "en", "profanity_words": ["fuckpig"], "confidence_level": "99.5%"}