hamishivi committed (verified) · Commit 3f9e038 · Parent: fc6da70

Update README.md

Files changed (1):
  1. README.md (+14 -11)
README.md CHANGED
````diff
@@ -3,6 +3,7 @@ license: apache-2.0
 datasets:
 - allenai/dolma
 - allenai/tulu-v2-sft-mixture
+- allenai/ultrafeedback_binarized_cleaned
 language:
 - en
 ---
@@ -11,7 +12,7 @@ language:
 <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 
-# Model Card for OLMo 1.7 7B SFT
+# Model Card for OLMo 7B April 2024 SFT
 
 **Requires transformers versions v4.40.0 or newer**
 
@@ -20,7 +21,8 @@ OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the scie
 The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
 The adapted versions are trained on the [Tulu SFT mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) and, for the Instruct version, a [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).
 
-OLMo 1.7 7B Instruct and OLMo SFT are two adapted versions of these models trained for better question answering.
+OLMo 7B April 2024 Instruct and OLMo SFT are two adapted versions of these models trained for better question answering.
+They are based on the OLMo 7B April release (previously called OLMo 1.7).
 They show the performance gain that OLMo base models can achieve with existing fine-tuning techniques.
 
 ## Model Details
@@ -28,13 +30,13 @@ They show the performance gain that OLMo base models can achieve with existing f
 We release two adapted model versions:
 | Model | Training Method(s) | Datasets | Context Length |
 |------|--------|---------|--|
-| [OLMo 1.7 7B SFT](https://huggingface.co/allenai/OLMo-1.7-7B-SFT-hf) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | 2048 |
-| [OLMo 1.7 7B Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Instruct-hf) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 2048 |
+| [OLMo 7B April 2024 SFT](https://huggingface.co/allenai/OLMo-1.7-7B-SFT-hf) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | 2048 |
+| [OLMo 7B April 2024 Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Instruct-hf) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 2048 |
 
-These models are both trained on top of OLMo 1.7 7b:
+These models are both trained on top of OLMo 7B April 2024 release (formerly called OLMo 1.7):
 | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
 |------|--------|---------|-------------|-----------------|----------------|
-| [OLMo 1.7 7B](https://huggingface.co/allenai/OLMo-1.7-7B-hf) | 2.05 Trillion |32 | 4096 | 32 | 4096 |
+| [OLMo 7B April 2024](https://huggingface.co/allenai/OLMo-1.7-7B-hf) | 2.05 Trillion |32 | 4096 | 32 | 4096 |
 
 
 ### Model Description
@@ -68,8 +70,8 @@ You can run these models using recent (>= 4.40) versions of transformers.
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-SFT-hf")
-tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-7B-SFT-hf")
+olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424-Instruct-hf")
+tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0424-Instruct-hf")
 chat = [
   { "role": "user", "content": "What is language modeling?" },
 ]
@@ -95,9 +97,10 @@ Core model results for the 7B adapted models are found below.
 
 | Model | MMLU 0-shot ↑ | AlpacaEval %win ↑ | ToxiGen % Toxic ↓ | TruthfulQA %Info+True ↑ |
 |-----------------------|---------------|--------------------|--------------------|-------------------------|
-| **OLMo 1.7 base** | 47.5 | - | 83.2 | 25.7 |
-| **[OLMo 1.7 7B SFT](https://huggingface.co/allenai/OLMo-1.7-7B-SFT-hf)** | 52.4 | 70.4 | 0.5 | 38.8 |
-| **[OLMo 1.7 7B Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Instruct-hf)** | 52.4 | 82.2 | 0.2 | 75.6
+| **OLMo 7B April 2024 base** | 47.5 | - | 83.2 | 25.7 |
+| **[OLMo 7B April 2024 SFT](https://huggingface.co/allenai/OLMo-1.7-7B-SFT-hf)** | 52.4 | 70.4 | 0.5 | 38.8 |
+| **[OLMo 7B April 2024 Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Instruct-hf)** | 52.4 | 82.2 | 0.2 | 75.6 |
+
 
 
 ## Model Details
````
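For context, the usage example touched in the `@@ -68,8 +70,8 @@` hunk is shown truncated at the `chat` list. A minimal sketch of how that snippet typically continues with the standard transformers chat-template API, assuming transformers >= 4.40 as the card requires; the generation settings below are illustrative and not part of this commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model and tokenizer IDs as in the updated snippet from this commit.
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0424-Instruct-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0424-Instruct-hf")
chat = [
    {"role": "user", "content": "What is language modeling?"},
]
# Render the conversation with the model's chat template and cue the assistant turn.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
# Sampling parameters are illustrative defaults, not values taken from this commit.
response = olmo.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```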