---
license: apache-2.0
language:
  - en
  - de
pipeline_tag: text-generation
---


Occiglot-7B-DE-EN-Instruct

A polyglot language model for the Occident.

Occiglot-7B-DE-EN-Instruct is the instruct version of occiglot-7b-de-en, a generative language model with 7B parameters supporting German and English, trained by the Occiglot Research Collective. It was trained on 180M tokens of additional multilingual and code instructions. Note that the model has not been safety-aligned and might generate problematic outputs.

This is the first release of an ongoing open research project for multilingual language models. If you want to train a model for your own language or are working on evaluations, please contact us or join our Discord server. We are open to collaborations!

Special thanks go to Disco Research and Björn Plüster for sharing the German dataset with us.

Model details

  • Instruction tuned from: occiglot-7b-de-en
  • Model type: Causal decoder-only transformer language model
  • Languages: English, German, and code.
  • License: Apache 2.0
  • Compute resources: DFKI cluster
  • Contributors: Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
  • Research labs: Occiglot with support from SAINT and SLT
  • Contact: Discord

How to use

The model was trained using the chatml instruction template. You can use the transformers chat template feature for interaction. Since the generation relies on some randomness, we set a seed for reproducibility:

>>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed
>>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
>>> model = MistralForCausalLM.from_pretrained('occiglot/occiglot-7b-de-en-instruct')  # You may want to use bfloat16 and/or move to GPU here
>>> set_seed(42)
>>> messages = [
>>>    {"role": "system", 'content': 'You are a helpful assistant. Please give short and concise answers.'},
>>>    {"role": "user", "content": "Wer ist der deutsche Bundeskanzler?"},
>>> ]
>>> tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_dict=False, return_tensors='pt',)
>>> set_seed(42)
>>> outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=200)
>>> tokenizer.decode(outputs[0][len(tokenized_chat[0]):])
'Der deutsche Bundeskanzler ist Olaf Scholz.'
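
The example above keeps the model in full precision on the CPU. Below is a minimal sketch of loading the weights in bfloat16 on a GPU instead (device_map="auto" requires the accelerate package), reusing the tokenizer and messages from above; passing tokenize=False to apply_chat_template also lets you inspect the raw chatml-formatted prompt string:

>>> import torch
>>> model = MistralForCausalLM.from_pretrained(
>>>     "occiglot/occiglot-7b-de-en-instruct",
>>>     torch_dtype=torch.bfloat16,  # roughly half the memory footprint of fp32
>>>     device_map="auto",           # places the weights on the available GPU(s); needs accelerate
>>> )
>>> print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))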

Dataset

The training data was split evenly amongst the two languages based on the total number of tokens. We would like to thank Disco Research and Björn Plüster for making their dataset available to us.

English and Code

German

Training settings

  • Full instruction fine-tuning on 8xH100.
  • 0.6 - 4 training epochs (depending on dataset sampling).
  • Framework: axolotl
  • Precision: bf16
  • Optimizer: AdamW
  • Global batch size: 128 (with 8192 context length)
  • Cosine Annealing with Warmup
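
For illustration only, here is a minimal PyTorch/transformers sketch of the optimizer and learning-rate schedule listed above (AdamW with cosine annealing and warmup). Training was actually run through axolotl, and the learning rate, warmup length, and step count below are placeholders rather than the published values:

import torch
from transformers import MistralForCausalLM, get_cosine_schedule_with_warmup

model = MistralForCausalLM.from_pretrained("occiglot/occiglot-7b-de-en")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # placeholder learning rate
num_training_steps = 1_000                                  # placeholder; depends on dataset size and epochs
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,                # placeholder warmup length
    num_training_steps=num_training_steps,
)

# Inside the training loop, after each backward pass:
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()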

Tokenizer

Tokenizer is unchanged from Mistral-7B-v0.1.
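
A quick sketch to check this locally, assuming both model repositories are reachable; the vocabulary size and the encoding of any given text should be identical:

from transformers import AutoTokenizer

occiglot_tok = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Both tokenizers should report the same vocabulary size and produce identical token ids
print(occiglot_tok.vocab_size, mistral_tok.vocab_size)
print(occiglot_tok.encode("Wer ist der deutsche Bundeskanzler?") ==
      mistral_tok.encode("Wer ist der deutsche Bundeskanzler?"))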

Evaluation

Preliminary evaluation results can be found below. Please note that the non-English results are based on partially machine-translated datasets and English prompts (Belebele and Okapi framework) and thus should be interpreted with caution, e.g., biased towards English model performance. Currently, we are working on more suitable benchmarks for Spanish, French, German, and Italian.
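
As a hedged sketch of how scores like the ones below can be computed with EleutherAI's lm-evaluation-harness; the exact harness version and task names used for this table are not documented here, so treat them as placeholders:

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=occiglot/occiglot-7b-de-en-instruct",
    tasks=["arc_challenge_de", "hellaswag_de", "belebele_de"],  # placeholder task names
)
print(results["results"])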

Evaluation results

German

| Model | arc_challenge_de | belebele_de | hellaswag_de | mmlu_de | truthfulqa_de | avg |
|---|---|---|---|---|---|---|
| occiglot/occiglot-7b-eu5 | 0.493584 | 0.646667 | 0.666631 | 0.483406 | 0.251269 | 0.508311 |
| occiglot/occiglot-7b-eu5-instruct | 0.529512 | 0.667778 | 0.685205 | 0.488234 | 0.286802 | 0.531506 |
| occiglot/occiglot-7b-de-en | 0.50556 | 0.743333 | 0.67421 | 0.514633 | 0.26269 | 0.540085 |
| occiglot/occiglot-7b-de-en-instruct | 0.54491 | 0.772222 | 0.688407 | 0.515915 | 0.310914 | 0.566474 |
| LeoLM/leo-mistral-hessianai-7b | 0.474765 | 0.691111 | 0.682109 | 0.488309 | 0.252538 | 0.517766 |
| mistralai/Mistral-7B-v0.1 | 0.476476 | 0.738889 | 0.610589 | 0.529567 | 0.284264 | 0.527957 |
| mistralai/Mistral-7B-Instruct-v0.2 | 0.485885 | 0.688889 | 0.622438 | 0.501961 | 0.376904 | 0.535215 |

Acknowledgements

The pre-trained model training was supported by a compute grant at the 42 supercomputer, which is a central component in the development of hessian AI, the AI Innovation Lab (funded by the Hessian Ministry of Higher Education, Research and the Arts (HMWK) & the Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)) and the AI Service Centers (funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK)). The curation of the training data is partially funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) through the project OpenGPT-X (project no. 68GX21007D).

License

Apache 2.0

See also