---
license: wtfpl
datasets:
- cakiki/rosetta-code
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- code
- programming-language
- code-classification
base_model: huggingface/CodeBERTa-small-v1
---
|
This model is a fine-tuned version of *huggingface/CodeBERTa-small-v1* on the *cakiki/rosetta-code* dataset, trained to classify code snippets into the 25 programming languages listed below.
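
A minimal usage sketch with the 🤗 Transformers `pipeline` API is shown below. The repository id is a placeholder for this model's Hub id, and the returned label names depend on the model's `id2label` mapping.

```python
from transformers import pipeline

# Placeholder repo id -- replace with this model's actual Hub id.
classifier = pipeline("text-classification", model="<your-username>/<this-model>")

snippet = """
def greet(name):
    print(f"Hello, {name}!")
"""

print(classifier(snippet))
# Example output shape (label names come from the model's id2label config):
# [{'label': 'Python', 'score': 0.99}]
```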
|
## Training Details
|
The model was trained for 25 epochs on Azure on roughly 26,000 data points covering the 25 programming languages listed below, extracted from a dataset that contains 1,006 programming languages in total (a sketch of this filtering step follows the language list).
|
### Programming languages the model can detect
|
<ol>
<li>ARM Assembly</li>
<li>AppleScript</li>
<li>C</li>
<li>C#</li>
<li>C++</li>
<li>COBOL</li>
<li>Erlang</li>
<li>Fortran</li>
<li>Go</li>
<li>Java</li>
<li>JavaScript</li>
<li>Kotlin</li>
<li>Lua</li>
<li>Mathematica/Wolfram Language</li>
<li>PHP</li>
<li>Pascal</li>
<li>Perl</li>
<li>PowerShell</li>
<li>Python</li>
<li>R</li>
<li>Ruby</li>
<li>Rust</li>
<li>Scala</li>
<li>Swift</li>
<li>Visual Basic .NET</li>
<li>jq</li>
</ol>
|
<br>
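
The roughly 26,000 training examples were extracted from *cakiki/rosetta-code* by keeping only the languages above. Below is a minimal sketch of that filtering step, assuming the dataset exposes a `language_name` column (adjust to the actual schema):

```python
from datasets import load_dataset

# The languages listed above; "language_name" is an assumed column name.
KEPT_LANGUAGES = {
    "ARM Assembly", "AppleScript", "C", "C#", "C++", "COBOL", "Erlang",
    "Fortran", "Go", "Java", "JavaScript", "Kotlin", "Lua",
    "Mathematica/Wolfram Language", "PHP", "Pascal", "Perl", "PowerShell",
    "Python", "R", "Ruby", "Rust", "Scala", "Swift", "Visual Basic .NET", "jq",
}

ds = load_dataset("cakiki/rosetta-code", split="train")
subset = ds.filter(lambda row: row["language_name"] in KEPT_LANGUAGES)
print(len(subset))  # should be roughly 26,000 examples
```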
|
|
|
## Training Results for 25 Epochs
|
<ul>
<li>Training machine configuration: 1x NVIDIA Tesla T4 GPU (16 GB VRAM), 112 GB RAM, 6 CPU cores</li>
<li>Training time: exactly 7 hours for 25 epochs</li>
<li>Training hyper-parameters (a fine-tuning sketch with placeholder values follows this list): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/645c859ad90782b1a6a3e957/yRqjKVFKZIT_zXjcA3yFW.png)</li>
</ul>
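
For reference, here is a minimal fine-tuning sketch. It continues from the filtered `subset` in the sketch after the language list; only the base checkpoint and the 25-epoch count come from this card, while the batch size, learning rate, and column names are illustrative assumptions rather than the exact hyper-parameters shown in the screenshot above.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Continues from `subset` in the filtering sketch above; the "code" and
# "language_name" column names are assumptions about the dataset schema.
checkpoint = "huggingface/CodeBERTa-small-v1"  # base model from this card
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

labels = sorted(set(subset["language_name"]))
label2id = {name: i for i, name in enumerate(labels)}
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label={i: name for name, i in label2id.items()},
    label2id=label2id,
)

def encode(row):
    enc = tokenizer(row["code"], truncation=True, max_length=512)
    enc["label"] = label2id[row["language_name"]]
    return enc

tokenized = subset.map(encode, remove_columns=subset.column_names)

# Only the epoch count comes from this card; the other values are
# illustrative assumptions.
args = TrainingArguments(
    output_dir="codeberta-language-id",
    num_train_epochs=25,
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, tokenizer=tokenizer)
trainer.train()
```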
|
|
|
|
|
![training detail.png](https://cdn-uploads.huggingface.co/production/uploads/645c859ad90782b1a6a3e957/Oi9TuJ8nEjtt6Z_W56myn.png)