amezasor committed on
Commit
73381e5
1 Parent(s): 342f92f

update after review

Files changed (1)
  1. README.md +34 -38
README.md CHANGED
@@ -2,9 +2,6 @@
  pipeline_tag: text-generation
  inference: false
  license: apache-2.0
- # datasets:
- # metrics:
- # - code_eval
  library_name: transformers
  tags:
  - language
@@ -203,40 +200,40 @@ model-index:
  value:
  verified: false
  ---
+ 
  <!-- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png) -->
+ ![image/png](granite-3_0-language-models_Group_1.png)

  # Granite-3.0-2B-Instruct

- ## Model Summary
- **Granite-3.0-2B-Instruct** is a lightweight and open-source 2B parameter model fine tuned from *Granite-3.0-2B-Base* on a combination of open-source and proprietary instruction data with a **permissively licensed**. This language model is designed to excel in instruction following tasks such as summarization, problem-solving, text translation, reasoning, code tasks, funcion-calling, and more.
- <!-- The lightweight and open-source nature of this model makes it an excellent choice to serve as backbone of real-time applications such as chatbots and conversational agents. -->
+ **Model Summary:**
+ Granite-3.0-2B-Instruct is a 2B parameter model finetuned from *Granite-3.0-2B-Base* using a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets. The model is developed with a diverse set of techniques and a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.

  - **Developers:** IBM Research
- - **GitHub Repository:** [ibm-granite/granite-language-models](https://github.com/ibm-granite/granite-language-models)
+ - **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- - **Paper:** [Granite Language Models](https://) <!-- TO DO: Update github repo link when it is ready -->
+ - **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf)
  - **Release Date**: October 21st, 2024
- - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

- ## Supported Languages
- English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, Chinese (Simplified)
+ **Supported Languages:**
+ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages.

- ## Usage
- ### Intended use
+ **Intended use:**
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

- ### Capabilities
+ *Capabilities*
  * Summarization
  * Text classification
  * Text extraction
  * Question-answering
  * Retrieval Augmented Generation (RAG)
- * Code related
- * Function-calling
+ * Code related tasks
+ * Function-calling tasks
  * Multilingual dialog use cases

- ### Generation
- This is a simple example of how to use **Granite-3.0-2B-Instruct** model.
+ **Generation:**
+ This is a simple example of how to use the Granite-3.0-2B-Instruct model.

  Install the following libraries:
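The install command and the body of the generation example are unchanged by this commit, so the diff elides them between these hunks. As a rough sketch only (not the elided snippet itself), generation with this checkpoint typically follows the standard transformers chat-template pattern; the Hub id and the prompt below are illustrative assumptions:

```python
# Rough sketch only -- not the exact snippet elided from this diff.
# Assumes: pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-2b-instruct"  # assumed Hub id for this card
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# Build a chat-formatted prompt from a list of messages.
chat = [{"role": "user", "content": "List one IBM Research laboratory located in the United States."}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Generate and decode; the next hunk's context line shows the card's own
# `output = tokenizer.batch_decode(output)` call.
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.batch_decode(output))
```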
 
@@ -273,11 +270,8 @@ output = tokenizer.batch_decode(output)
  print(output)
  ```

- <!-- TO DO: function-calling-example
- -->
-
- ## Model Architeture
- **Granite-3.0-2B-Instruct** is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embbeddings.
+ **Model Architecture:**
+ Granite-3.0-2B-Instruct is based on a decoder-only dense transformer architecture. Core components of this architecture are GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings.

  | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
  | :-------- | :-------- | :--------| :--------| :--------|
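A side note on the architecture summary added in the hunk above: grouped-query attention and the shared input/output embeddings can be checked directly against the published configuration. This is a minimal sketch assuming the config exposes the usual Llama-style fields (`num_attention_heads`, `num_key_value_heads`, `tie_word_embeddings`); verify against the actual config.json:

```python
# Minimal sketch: read architecture details from the published config.
# Field names assume a Llama-style config; check the model's actual config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ibm-granite/granite-3.0-2b-instruct")

n_heads = config.num_attention_heads
n_kv_heads = getattr(config, "num_key_value_heads", n_heads)
print("attention heads:", n_heads)
print("key/value heads:", n_kv_heads)                   # fewer KV heads than attention heads => GQA
print("tied embeddings:", config.tie_word_embeddings)   # shared input/output embeddings
print("hidden activation:", config.hidden_act)          # SwiGLU MLPs use a SiLU gate
print("RoPE theta:", getattr(config, "rope_theta", "n/a"))
```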
@@ -297,21 +291,23 @@ print(output)
  | # Active Parameters | **2.5B** | 8.1B | 400M | 800M |
  | # Training tokens | **12T** | 12T | 10T | 10T |

- <!-- TO DO: To be completed once the paper is ready, we may changed title to Supervised Finetuning -->
- ## Training Data
- Granite Language Instruct models are trained on a collection of publicly available datasets with non-restrictive license, as well as an IBM collection of synthetic datasets. We annotated and filtered these datasets to only include high-quality instances from each of them in our final mixture. This dataset selection is representative of the following domains:
+ **Training Data:**
+ Overall, our SFT data is largely comprised of three key sources: (1) publicly available datasets with permissive license, (2) internal synthetic data targeting specific capabilities, and (3) very small amounts of human-curated data. Please refer to the [Granite 3.0 Language Models technical report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/granite-3-language-models.pdf) for more details on the individual categories and datasets.

- * English datasets: [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), [WebInstructSub](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub), [OASST-OctoPack](https://huggingface.co/datasets/bigcode/oasst-octopack), [Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater), [SoftAge-Multiturn](https://huggingface.co/datasets/SoftAge-AI/multi-turn_dataset), [Glaive-RAG-v1](https://huggingface.co/datasets/glaiveai/RAG-v1), [EvolKit-20k](https://huggingface.co/datasets/arcee-ai/EvolKit-20k), [Magpie-Phi3-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Phi3-Pro-300K-Filtered).
- * Multilingual datasets: [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) and IBM Synthetic datasets (e.g., Blue Multilingual, Daring Anteater Translated).
- * Code datasets: [Glaive Code Assistant V3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [SQL Create Context Instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), and [Self-OSS-Instruct-SC2](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k). Single and multi-turn IBM synthetic datasets, including a set of datasets generated via the evol-instruct method.
- * Math: [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [StackMathQA](https://huggingface.co/datasets/math-ai/StackMathQA), and [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- * Tools: [xlam-function-calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k), [Glaive Function Calling V2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [Hermes Function Calling V1](https://huggingface.co/datasets/NousResearch/hermes-function-calling-v1), and IBM Synthetic API data.
- * Safety: [SimpleSafetyTests](https://huggingface.co/datasets/Bertievidgen/SimpleSafetyTests), [HarmBench Behaviors](https://github.com/centerforaisafety/HarmBench/blob/main/data/behavior_datasets/harmbench_behaviors_text_all.csv), [Strong Reject](https://github.com/alexandrasouly/strongreject/blob/main/strongreject_dataset/strongreject_dataset.csv), [AdvBench](https://huggingface.co/datasets/walledai/AdvBench), [MistralGuard](https://huggingface.co/datasets/natolambert/xstest-v2-copy), [Do-Not-Answer](https://huggingface.co/datasets/LibrAI/do-not-answer), and IBM Synthetic data for safety.
+ **Infrastructure:**
+ We train Granite 3.0 Language Models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

- <!-- CHECK: removed Vela, only talk about blue-vela-->
- ## Infrastructure
- We train the Granite Language models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
+ **Ethical Considerations and Limitations:**
+ Granite 3.0 Instruct Models are primarily finetuned using instruction-response pairs mostly in English, but also multilingual data covering eleven languages. Although this model can handle multilingual dialog use cases, its performance might not match that on English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We therefore urge the community to use this model with proper safety testing and tuning tailored to their specific tasks.

- <!-- TO DO: Check multilingual statement once the paper is ready -->
- ## Ethical Considerations and Limitations
- Granite instruct models are primarily finetuned using instruction-response pairs mostly in English, but also in German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese (Simplified). As this model has been exposed to multilingual data, it can handle multilingual dialog use cases with a limited performance in non-English tasks. In such case, introducing a small number of examples (few-shot) can help the model in generating more accurate outputs. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to *[Granite-3.0-2B-Base](https://huggingface.co/ibm-granite/granite-3.0-2b-base)* model card.
+ <!-- ## Citation
+ ```
+ @misc{granite-models,
+ author = {author 1, author2, ...},
+ title = {},
+ journal = {},
+ volume = {},
+ year = {2024},
+ url = {https://arxiv.org/abs/0000.00000},
+ }
+ ``` -->
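The ethical-considerations text added in the last hunk recommends few-shot prompting for non-English use. A minimal, hypothetical sketch of such a prompt follows; the Hub id is assumed, and the German translation pairs are illustrative only, not taken from the training data:

```python
# Hypothetical few-shot prompt for a non-English task, as suggested above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.0-2b-instruct"  # assumed Hub id for this card
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)

# Two in-context example pairs, then the actual request.
few_shot_chat = [
    {"role": "user", "content": "Translate to German: Good morning."},
    {"role": "assistant", "content": "Guten Morgen."},
    {"role": "user", "content": "Translate to German: Thank you very much."},
    {"role": "assistant", "content": "Vielen Dank."},
    {"role": "user", "content": "Translate to German: See you tomorrow."},
]
input_ids = tokenizer.apply_chat_template(
    few_shot_chat, add_generation_prompt=True, return_tensors="pt"
).to(device)
output = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```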