---
language: 
  - code
---

# BLOOM (560M ckpt) fine-tuned on The Stack Rust code

- Latest ckpt: https://huggingface.co/mrm8488/bloom-560m-finetuned-the-stack-rust/tree/100k

## Model
[BigScience Large Open-science Open-access Multilingual Language Model](https://huggingface.co/bigscience/bloom-560m#model-details)

## Dataset

**Rust** part of The [Stack](https://huggingface.co/datasets/bigcode/the-stack).

The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the BigCode Project, an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets.

## Example of usage

```py
import torch
from transformers import BloomTokenizerFast, BloomForCausalLM

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/bloom-560m-finetuned-the-stack-rust'
revision = '100k'  # latest checkpoint at the moment

tokenizer = BloomTokenizerFast.from_pretrained(ckpt)
model = BloomForCausalLM.from_pretrained(ckpt, revision=revision).to(device)

def complete_code(text):
    inputs = tokenizer(text, return_tensors='pt')
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    # Greedy decoding, capped at the model's 2048-token context window;
    # no_grad avoids building the autograd graph during inference
    with torch.no_grad():
        output = model.generate(input_ids, attention_mask=attention_mask, max_length=2048, eos_token_id=tokenizer.eos_token_id)

    return tokenizer.decode(output[0], skip_special_tokens=False)

code_prompt = """
use fastly::{Error, Request, Response};
use serde_json::{json, Value};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
  let mut response = req.send("origin_0")?;
"""

print(complete_code(code_prompt))
```

## Citation
```
@misc {manuel_romero_2022,
	author       = { {Manuel Romero} },
	title        = { bloom-560m-finetuned-the-stack-rust (Revision 5358462) },
	year         = 2022,
	url          = { https://huggingface.co/mrm8488/bloom-560m-finetuned-the-stack-rust },
	doi          = { 10.57967/hf/0236 },
	publisher    = { Hugging Face }
}
```