bert-base-japanese-sentiment-cyberbullying

This is a BERT Base model for the Japanese language, fine-tuned for automatic cyberbullying detection.

The model is based on daigo's BERT Base for Japanese sentiment analysis and was fine-tuned on a balanced dataset created by unifying two datasets: the "Harmful BBS Japanese comments dataset" and the "Twitter Japanese cyberbullying dataset".
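
As a rough illustration, the model can be loaded through the Hugging Face transformers text-classification pipeline, as in the minimal sketch below. The model identifier is taken from the citation URL further down; the returned label names and the tokenizer dependencies (fugashi, unidic-lite) are assumptions and should be checked against the model's configuration.

from transformers import pipeline

# Load the fine-tuned checkpoint as a standard text-classification pipeline.
# Note: Japanese BERT tokenizers typically require the fugashi and
# unidic-lite packages to be installed.
classifier = pipeline(
    "text-classification",
    model="kit-nlp/bert-base-japanese-sentiment-cyberbullying",
)

# Classify a Japanese comment; the returned label indicates whether the text
# is flagged as cyberbullying (exact label names depend on the model config).
result = classifier("これは普通のコメントです。")  # "This is an ordinary comment."
print(result)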

Licenses

The fine-tuned model and all attached files are licensed under CC BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International License).


Citations

Please cite this model using the following citation.

@inproceedings{tanabe2022bert-base-cyberbullying,
  title={北見工業大学 テキスト情報処理研究室 BERT Base ネットいじめ検出モデル (Daigo ver.)},
  author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人},
  publisher={HuggingFace},
  year={2022},
  url={https://huggingface.co/kit-nlp/bert-base-japanese-sentiment-cyberbullying}
}