---
datasets:
- mvasiliniuc/iva-kotlin-codeint-clean-train
- mvasiliniuc/iva-kotlin-codeint-clean-valid
language:
- code
tags:
- gpt2
- code
- kotlin
- mobile
- generation
widget:
- text: "/**\n\t* A function that returns the version of the current operating system.\n*/\n"
example_title: "Get current device operating system"
- text: "/**\n\t* A function that returns the current TimeZone.\n*/\n"
example_title: "Get current timezone"
- text: "/**\n\t* A data class representing a Bank Account.\n*/\n"
example_title: "Data Class - BankAccount"
---
iva-codeint-kotlin-small is a GPT-2 model (small version, 239.4M parameters) trained from scratch for the text-to-code task, tailored to the Kotlin language as used in native mobile (Android) development.
## Usage
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-kotlin-small")
outputs = pipe("fun printToConsole()")
```
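The prompts in the widget section above are Kotlin doc comments. As a minimal sketch of passing one of them through the same pipeline with explicit generation parameters (the parameter values below are illustrative assumptions, not recommendations from this model card):
```Python
from transformers import pipeline

pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-kotlin-small")

# Doc-comment prompt in the same style as the widget examples above.
prompt = "/**\n\t* A function that returns the current TimeZone.\n*/\n"

# Illustrative generation settings; adjust them for your use case.
outputs = pipe(
    prompt,
    max_new_tokens=64,
    num_return_sequences=1,
    do_sample=True,
    temperature=0.2,
)
print(outputs[0]["generated_text"])
```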
### Inference
```Python
import requests
import pprint

API_URL = "https://api-inference.huggingface.co/models/mvasiliniuc/iva-codeint-kotlin-small"
headers = {"Authorization": "Bearer <key>"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": """
/**
* A public function that returns the current version of the operating system.
*/
"""
})
pprint.pprint(output, compact=True)
```
## Training
| Config | Value |
|------|------------------|
| seq length | 1024 |
| weight decay | 0.1 |
| learning rate | 0.0005 |
| max eval steps | -1 |
| shuffle buffer | 10000 |
| max train steps | 150000 |
| mixed precision | fp16 |
| num warmup steps | 2000 |
| train batch size | 5 |
| valid batch size | 5 |
| lr scheduler type | cosine |
| save checkpoint steps | 15000 |
| gradient checkpointing | false |
| gradient accumulation steps | 1 |
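As a rough illustration of how the optimizer-related values in the table fit together, below is a minimal sketch (not the actual training script) that assumes a standard AdamW optimizer with the cosine schedule from `transformers`; the data pipeline, training loop, and fp16 setup are omitted.
```Python
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

# Sketch only: hyperparameter values taken from the table above.
model = AutoModelForCausalLM.from_pretrained("mvasiliniuc/iva-codeint-kotlin-small")
optimizer = AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2_000,      # num warmup steps
    num_training_steps=150_000,  # max train steps
)
```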
## Resources
Resources used for research:
* [Training a causal language model from scratch](https://huggingface.co/learn/nlp-course/chapter7/6)
* [CodeParrot a GPT-2 model (1.5B parameters) trained to generate Python code](https://huggingface.co/codeparrot/codeparrot)