---
license: bigcode-openrail-m
datasets:
- OpenAssistant/oasst1
- databricks/databricks-dolly-15k
language:
- en
library_name: transformers
tags:
- code
---
|
|
|
# Model Card for StarChat Alpha

<!-- Provide a quick summary of what the model is/does. -->

StarChat is a series of language models fine-tuned from StarCoder to act as helpful coding assistants. StarChat Alpha is the first model in the series, and as an alpha release it is intended only for educational or research purposes. In particular, the model has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content (especially when prompted to do so).
|
|
|
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Model type:** A 16B-parameter GPT-like model fine-tuned on a blend of the [`oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) and [`databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k) datasets.
- **Language(s) (NLP):** English
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderbase](https://huggingface.co/bigcode/starcoderbase)
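
If you want to inspect the fine-tuning data, both datasets can be loaded directly from the Hub with 🤗 Datasets. The snippet below is a minimal sketch for browsing them; it does not reproduce the exact blend or filtering used for training.

```python
from datasets import load_dataset

# Load the two instruction datasets listed above.
# Note: this only inspects the raw data; the exact blending and filtering
# applied when training StarChat Alpha is not reproduced here.
oasst1 = load_dataset("OpenAssistant/oasst1", split="train")
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

print(oasst1)  # conversation trees of prompts and assistant replies
print(dolly)   # ~15k instruction/response pairs
```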
|
|
|
### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
|
|
|
## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

StarChat Alpha is intended for educational and research purposes, and in that respect can be used to probe the programming capabilities of open-source language models.
|
|
|
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like most language models, StarChat Alpha will often hallucinate facts or generate problematic outputs (especially when prompted to do so).
|
|
|
## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-alpha")

# Inputs follow StarChat's dialogue template, which wraps each turn in special chat tokens
inputs = "<|system|>\n<|end|>\n<|user|>How can I sort a list in Python?<|end|>\n<|assistant|>"
outputs = pipe(inputs)
print(outputs[0]["generated_text"])
```
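
The `<|end|>` token closes each turn in the dialogue template, so it is convenient to use it as the stopping criterion during generation. The slightly fuller sketch below does this; the dtype, device placement, and sampling settings are illustrative choices rather than tuned defaults.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/starchat-alpha",
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

# <|end|> closes each turn, so treat it as the end-of-sequence token.
end_token_id = pipe.tokenizer.convert_tokens_to_ids("<|end|>")

prompt = "<|system|>\n<|end|>\n<|user|>How can I sort a list in Python?<|end|>\n<|assistant|>"
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
    eos_token_id=end_token_id,
)
print(outputs[0]["generated_text"])
```

Note that the pipeline returns the prompt together with the completion; pass `return_full_text=False` if you only want the assistant's reply.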
|
|
|
|
|
|
|
## Citation

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

```
@article{Tunstall2023starchat-alpha,
  author  = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
  title   = {Creating a Coding Assistant with StarCoder},
  journal = {Hugging Face Blog},
  year    = {2023},
  note    = {https://huggingface.co/blog/starchat},
}
```