---
datasets:
- mvasiliniuc/iva-swift-codeint-clean-train
- mvasiliniuc/iva-swift-codeint-clean-valid
language:
- code 
tags:
- gpt2
- code
- swift
- mobile
- generation
widget:
- text: "/*\n  A function that returns the time zone currently configured on the device.\n*/\n"
  example_title: "Get the current time zone of the device"
- text: "/*\n  A function that returns the current version of the operating system.\n*/\n"
  example_title: "Get current device operating system"
- text: "/* \nA function that fires an NSNotification named 'MyUpdate'. \n*/\npublic func post"
  example_title: "Post NSNotification"
- text: "/* \nA public function that saves a given String value in UserPreference at a given String key.\n*/\n"
  example_title: "Save to UserPreference"
---
iva-codeint-swift-small is a GPT-2 model (small version, 239.4M parameters) trained from scratch for the text-to-code task, tailored to the Swift language as used in native mobile (iOS) development.

## Usage

```Python
from transformers import pipeline

# Load the model into a text-generation pipeline.
pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-swift-small")

# Complete a Swift function from a signature prompt.
outputs = pipe("func triggerNSNotification")
print(outputs[0]["generated_text"])
```
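
For more control over decoding, the pipeline accepts the usual generation arguments. A minimal sketch using a doc-comment prompt in the style of the widget examples above (the parameter values here are illustrative, not tuned):

```Python
from transformers import pipeline

pipe = pipeline("text-generation", model="mvasiliniuc/iva-codeint-swift-small")

# A doc-comment prompt, matching the style of the widget examples.
prompt = "/*\n  A function that returns the current version of the operating system.\n*/\n"

outputs = pipe(
    prompt,
    max_new_tokens=64,       # cap the length of the generated completion
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.2,         # a low temperature keeps code generation conservative
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```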

### Inference
```Python
import requests
import pprint

API_URL = "https://api-inference.huggingface.co/models/mvasiliniuc/iva-codeint-swift-small"
headers = {"Authorization": "Bearer <key>"}  # your Hugging Face access token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": """
/*
A function that gets the current device operating system.
*/
"""
})
pprint.pprint(output, compact=True)
```
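
On success, the hosted Inference API returns a list of generated sequences. A small sketch for pulling out the text, assuming the standard `text-generation` response shape (a list of objects with a `generated_text` field):

```Python
# `output` comes from the query() call above.
if isinstance(output, list) and output:
    print(output[0].get("generated_text", ""))
else:
    # Errors (e.g. the model still loading) come back as a dict with an "error" key.
    print(output)
```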

## Training

| Config | Value |
|------|------------------|
| seq length | 1024 |
| weight decay | 0.1 |
| learning rate | 0.0005 |
| max eval steps | -1 |
| shuffle buffer | 10000 |
| max train steps | 150000 |
| mixed precision | fp16 |
| num warmup steps | 2000 |
| train batch size | 5 |
| valid batch size | 5 |
| lr scheduler type | cosine |
| save checkpoint steps | 15000 |
| gradient checkpointing | false |
| gradient accumulation steps | 1 |
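
These values map onto a standard causal-LM training setup. A minimal sketch of how the optimizer and scheduler rows could be wired up with `transformers` (illustrative only, not the exact training script; the base GPT-2 config is a stand-in for the 239.4M-parameter architecture):

```Python
from torch.optim import AdamW
from transformers import AutoConfig, AutoModelForCausalLM, get_scheduler

# Fresh GPT-2 weights, matching "trained from scratch".
config = AutoConfig.from_pretrained("gpt2", n_positions=1024)  # seq length 1024
model = AutoModelForCausalLM.from_config(config)

optimizer = AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)
lr_scheduler = get_scheduler(
    "cosine",                    # lr scheduler type
    optimizer=optimizer,
    num_warmup_steps=2_000,      # num warmup steps
    num_training_steps=150_000,  # max train steps
)

# fp16 mixed precision, batch size 5, and checkpoint saves every 15,000
# steps would live in the training loop itself (e.g. with accelerate).
```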

## Resources

Resources used for research: 
* [Training a causal language model from scratch](https://huggingface.co/learn/nlp-course/chapter7/6)
* [CodeParrot a GPT-2 model (1.5B parameters) trained to generate Python code](https://huggingface.co/codeparrot/codeparrot)