Dataset Viewer
The dataset was auto-converted to Parquet and has three string columns: `instruction` (a single prompt shared by every row), `input` (2–30 characters), and `output` (103–135 characters).
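To work with the rows programmatically, the dataset can be loaded with the `datasets` library. The sketch below is illustrative only: the repository ID is a placeholder (the dataset's actual name does not appear on this page) and the `train` split is assumed.

```python
# Minimal loading sketch. The repo ID is a placeholder -- the real dataset
# name is not shown on this page -- and the "train" split is assumed.
from datasets import load_dataset

ds = load_dataset("user/profanity-dataset", split="train")

print(ds.column_names)  # ['instruction', 'input', 'output']
row = ds[0]
print(row["input"])     # a short word or phrase (2-30 characters)
print(row["output"])    # a JSON string with the analysis (103-135 characters)
```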
Every row pairs the single shared `instruction` with a short `input` text:

> Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure: { "has_profanity": <boolean>, "profanity_language": "<string (language code)>", "profanity_words": [<list of strings, if any>], "confidence_level": "<string (percentage, e.g., 99.5%)>" } Ensure your response is valid JSON.

The `input` and `output` columns are listed below, with each `output` JSON expanded into its four fields:

| input | has_profanity | profanity_language | profanity_words | confidence_level |
| --- | --- | --- | --- | --- |
| sikien | true | tr | ["sikien"] | 99.5% |
| Good morning | false | en | [] | 100% |
| Zeitung | false | de | [] | 100% |
| Avenue | false | en | [] | 100% |
| amina koyayım | true | tr | ["amina koyayım"] | 99.5% |
| ıspanak | false | tr | [] | 100% |
| Desktop | false | de | [] | 100% |
| muschi | true | de | ["muschi"] | 99.5% |
| onanieren | true | de | ["onanieren"] | 99.5% |
| kackbratze | true | de | ["kackbratze"] | 99.5% |
| boklar | true | tr | ["boklar"] | 99.5% |
| hardcore | true | en | ["hardcore"] | 99.5% |
| Concert | false | en | [] | 100% |
| Bildschirm | false | de | [] | 100% |
| schnackeln | true | de | ["schnackeln"] | 99.5% |
| biber | false | tr | [] | 100% |
| Universität | false | de | [] | 100% |
| shytty | true | en | ["shytty"] | 99.5% |
| Berlin | false | de | [] | 100% |
| öğrenim | false | tr | [] | 100% |
| domaltmak | true | tr | ["domaltmak"] | 99.5% |
| kackbratze | true | de | ["kackbratze"] | 99.5% |
| Credit card | false | en | [] | 100% |
| analarn | true | tr | ["analarn"] | 99.5% |
| Winter | false | en | [] | 100% |
| Bank | false | de | [] | 100% |
| Music | false | en | [] | 100% |
| Bus | false | en | [] | 100% |
| Tastatur | false | de | [] | 100% |
| Cost | false | en | [] | 100% |
| Handy | false | de | [] | 100% |
| Markt | false | de | [] | 100% |
| amcik | true | tr | ["amcik"] | 99.5% |
| armağan | false | tr | [] | 100% |
| pasaport | false | tr | [] | 100% |
| Picture | false | en | [] | 100% |
| Winter | false | en | [] | 100% |
| University | false | en | [] | 100% |
| sokarm | true | tr | ["sokarm"] | 99.5% |
| Train | false | en | [] | 100% |
| Gebäude | false | de | [] | 100% |
| vibrator | true | en | ["vibrator"] | 99.5% |
| schwuchtel | true | de | ["schwuchtel"] | 99.5% |
| rüzgar | false | tr | [] | 100% |
| nutte | true | de | ["nutte"] | 99.5% |
| Büro | false | de | [] | 100% |
| t i ts | true | en | ["t i ts"] | 99.5% |
| dalyarak | true | tr | ["dalyarak"] | 99.5% |
| tittchen | true | de | ["tittchen"] | 99.5% |
| kacke | true | de | ["kacke"] | 99.5% |
| zikim | true | tr | ["zikim"] | 99.5% |
| cuck | true | en | ["cuck"] | 99.5% |
| tasarım | false | tr | [] | 100% |
| wichsen | true | de | ["wichsen"] | 99.5% |
| Autobahn | false | de | [] | 100% |
| titten | true | de | ["titten"] | 99.5% |
| bratze | true | de | ["bratze"] | 99.5% |
| Hello | false | en | [] | 100% |
| sikini | true | tr | ["sikini"] | 99.5% |
| Train | false | en | [] | 100% |
| pirinç | false | tr | [] | 100% |
| mufti | true | de | ["mufti"] | 99.5% |
| bakanlık | false | tr | [] | 100% |
| Herbst | false | de | [] | 100% |
| peynir | false | tr | [] | 100% |
| Kühl | false | de | [] | 100% |
| Evening | false | en | [] | 100% |
| fratze | true | de | ["fratze"] | 99.5% |
| nlggor | true | en | ["nlggor"] | 99.5% |
| Picture | false | en | [] | 100% |
| masterbaiter | true | en | ["masterbaiter"] | 99.5% |
| Universität | false | de | [] | 100% |
| scallywag | true | en | ["scallywag"] | 99.5% |
| domalt | true | tr | ["domalt"] | 99.5% |
| wichsen | true | de | ["wichsen"] | 99.5% |
| möpse | true | de | ["möpse"] | 99.5% |
| sahip | false | tr | [] | 100% |
| mufti | true | de | ["mufti"] | 99.5% |
| Bargeld | false | de | [] | 100% |
| Willkommen | false | de | [] | 100% |
| kütüphane | false | tr | [] | 100% |
| amk çocuğu | true | tr | ["amk çocuğu"] | 99.5% |
| skime | true | tr | ["skime"] | 99.5% |
| babası pezevenk | true | tr | ["babası pezevenk"] | 99.5% |
| shit-ass | true | en | ["shit-ass"] | 99.5% |
| Supermarkt | false | de | [] | 100% |
| manav | false | tr | [] | 100% |
| lümmel | true | de | ["lümmel"] | 99.5% |
| Musik | false | de | [] | 100% |
| doggie-style | true | en | ["doggie-style"] | 99.5% |
| satıcı | false | tr | [] | 100% |
| kacken | true | de | ["kacken"] | 99.5% |
| babanı | true | tr | ["babanı"] | 99.5% |
| siktimin | true | tr | ["siktimin"] | 99.5% |
| bonze | true | de | ["bonze"] | 99.5% |
| gotundeki | true | tr | ["gotundeki"] | 99.5% |
| aksesuar | false | tr | [] | 100% |
| doktor | false | tr | [] | 100% |
| savcı | false | tr | [] | 100% |
| kasım | false | tr | [] | 100% |
A README.md exists in the repository, but its content is empty, so the dataset ships without a dataset card.
Downloads last month: 39