---
license: mit
datasets:
  - vishnun/CodevsNL
language:
  - en
metrics:
  - accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
  - code
  - nli
---

## Preface

Code vs. natural language classification, fine-tuned from prajwall's bert-small. The metrics achieved during training are listed below.
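## Usage

A minimal sketch of running the classifier with the transformers `text-classification` pipeline. The repo id `vishnun/codenlbert-tiny` is assumed from this model card's location, and the example inputs are only illustrative; the label names returned depend on the model's configuration.

```python
from transformers import pipeline

# Assumed hub repo id; adjust if the model lives at a different path.
classifier = pipeline("text-classification", model="vishnun/codenlbert-tiny")

samples = [
    "def add(a, b):\n    return a + b",                 # code snippet
    "The weather was pleasant throughout the trip.",    # natural language
]

# The pipeline returns one {"label": ..., "score": ...} dict per input.
for text, result in zip(samples, classifier(samples)):
    print(result["label"], round(result["score"], 4), "<-", text[:40])
```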

## Training Metrics

| Epoch | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 1     | 0.022500      | 0.012705        | 0.997203 |
| 2     | 0.008700      | 0.013107        | 0.996880 |
| 3     | 0.002700      | 0.014081        | 0.997633 |
| 4     | 0.001800      | 0.010666        | 0.997526 |
| 5     | 0.000900      | 0.010800        | 0.998063 |

## More