---
pipeline_tag: text-generation
inference: true
widget:
  - text: 'def print_hello_world():'
    example_title: Hello world
    group: Python
license: bigcode-openrail-m
datasets:
  - bigcode/the-stack-dedup
metrics:
  - code_eval
library_name: transformers
tags:
  - code
model-index:
  - name: Tiny-StarCoder-Py
    results:
      - task:
          type: text-generation
        dataset:
          type: openai_humaneval
          name: HumanEval
        metrics:
          - name: pass@1
            type: pass@1
            value: 7.84%
            verified: false
---

# TinyStarCoderPy

This is a 159M parameter model with the same architecture as StarCoder (8k context length, MQA & FIM). It was trained on the Python data from StarCoderData for ~6 epochs, which amounts to roughly 100B tokens.
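
To double-check these figures from code, the snippet below is a minimal sketch that inspects the published config and counts parameters; the attribute names (`n_positions`, `multi_query`) assume the GPTBigCode config class used by StarCoder-style checkpoints.

```python
# pip install -q transformers
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "bigcode/tiny_starcoder_py"

# Inspect the config without downloading the weights.
config = AutoConfig.from_pretrained(checkpoint)
print(config.n_positions)  # context length, expected to be 8192 (8k)
print(config.multi_query)  # True -> multi-query attention (MQA)

# Loading the model lets you count parameters (~159M expected).
model = AutoModelForCausalLM.from_pretrained(checkpoint)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")
```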

## Use

### Intended use

The model was trained on GitHub code to assist with tasks like Assisted Generation. For pure code completion, we advise using our 15B models StarCoder or StarCoderBase.
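
As an illustration of Assisted Generation, the sketch below uses this checkpoint as the draft model for a larger StarCoder model via the `assistant_model` argument of `generate()`. The pairing with `bigcode/starcoder` is an assumption (assisted generation requires both models to share a tokenizer), and loading the 15B model needs a large GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Large target model (assumed pairing; ~30 GB of GPU memory in float16).
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder", torch_dtype=torch.float16
).to(device)

# Small draft model: the tiny model proposes tokens that the large model verifies.
assistant = AutoModelForCausalLM.from_pretrained("bigcode/tiny_starcoder_py").to(device)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(device)
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```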

### Generation

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/tiny_starcoder_py"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

### Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:

```python
input_text = "<fim_prefix>def print_one_two_three():\n    print('one')\n    <fim_suffix>\n    print('three')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
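
The decoded output contains the prompt followed by the generated middle segment. If you only want the infill itself, a small post-processing step like the sketch below works; it assumes the FIM markers and the end-of-sequence token (`tokenizer.eos_token`) appear verbatim in the decoded text.

```python
# Extract only the generated middle segment from the decoded output.
decoded = tokenizer.decode(outputs[0])
middle = decoded.split("<fim_middle>", 1)[1]       # keep text after the middle marker
middle = middle.split(tokenizer.eos_token, 1)[0]   # drop everything from EOS onwards
print(middle)
```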

## Training

### Model

- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 50k
- **Pretraining tokens:** 100 billion
- **Precision:** bfloat16 (see the loading sketch below)
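
Because the model was trained in bfloat16, it can also be loaded in that precision to reduce memory use; a minimal sketch (assumes a GPU with bfloat16 support, e.g. Ampere or newer):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the weights in bfloat16 instead of the default float32.
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/tiny_starcoder_py", torch_dtype=torch.bfloat16
).to("cuda")
```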

### Hardware

- **GPUs:** 32 Tesla A100
- **Training time:** 18 hours

### Software

## License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement here.