Model Info

This model was developed and fine-tuned for a Turkish product review sentiment classification task. It was fine-tuned on the hepsiburada.com product review dataset (a usage sketch follows the label list below).

  • LABEL_0: negative review
  • LABEL_1: positive review
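
The snippet below is a minimal usage sketch, assuming the checkpoint anilguven/electra_tr_turkish_product_reviews (the model id shown on this card) is loaded through the Hugging Face Transformers text-classification pipeline; the example review and the printed score are illustrative.

```python
from transformers import pipeline

# Load the fine-tuned Turkish product-review classifier
# (model id taken from this card; the pipeline setup is an assumption).
classifier = pipeline(
    "text-classification",
    model="anilguven/electra_tr_turkish_product_reviews",
)

# Hypothetical Turkish review: "The product arrived quickly, I am very satisfied."
review = "Ürün çok hızlı geldi, çok memnun kaldım."

# LABEL_0 = negative review, LABEL_1 = positive review (see the label list above).
print(classifier(review))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]  (illustrative output)
```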

Model Sources

Preprocessing

You must apply Turkish-specific preprocessing, such as stopword removal and stemming or lemmatization, before passing text to the model.
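
As a sketch of that preprocessing step, the snippet below removes Turkish stopwords using NLTK's stopword corpus; the card does not name a specific toolkit, so the use of NLTK and the `preprocess` helper are assumptions, and the stemming/lemmatization step is only noted in a comment.

```python
import nltk
from nltk.corpus import stopwords

# Download the stopword corpus once (NLTK ships a Turkish stopword list).
nltk.download("stopwords", quiet=True)

TURKISH_STOPWORDS = set(stopwords.words("turkish"))

def preprocess(text: str) -> str:
    """Lowercase the review and drop Turkish stopwords.

    Stemming or lemmatization (e.g. with a Turkish morphological
    analyzer) would be applied after this step; it is omitted here
    because the card does not specify a tool.
    """
    tokens = text.lower().split()
    kept = [tok for tok in tokens if tok not in TURKISH_STOPWORDS]
    return " ".join(kept)

# Hypothetical review: "This product is really very good and high quality."
print(preprocess("Bu ürün gerçekten çok iyi ve kaliteli"))
```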

Results

  • Accuracy: 92.54%

Citation

BibTeX:

@INPROCEEDINGS{9559007,
  author={Guven, Zekeriya Anil},
  booktitle={2021 6th International Conference on Computer Science and Engineering (UBMK)},
  title={The Effect of BERT, ELECTRA and ALBERT Language Models on Sentiment Analysis for Turkish Product Reviews},
  year={2021},
  pages={629-632},
  keywords={Computer science;Sentiment analysis;Analytical models;Computational modeling;Bit error rate;Time factors;Random forests;Sentiment Analysis;Language Model;Product Review;Machine Learning;E-commerce},
  doi={10.1109/UBMK52708.2021.9559007}
}

APA:

Guven, Z. A. (2021, September). The effect of BERT, ELECTRA and ALBERT language models on sentiment analysis for Turkish product reviews. In 2021 6th International Conference on Computer Science and Engineering (UBMK) (pp. 629-632). IEEE.
