shimmyshimmer committed
Commit 1121f74
1 Parent(s): 79b8130

Update README.md

Files changed (1)
  1. README.md +8 -16
README.md CHANGED
@@ -18,7 +18,7 @@ We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://c
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
 # unsloth/SmolLM2-1.7B-Instruct-GGUF
-For more details on the model, please go to Meta's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
+For more details on the model, please go to Hugging Face's original [model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
 
 ## ✨ Finetune for Free
 
@@ -39,24 +39,16 @@ All notebooks are **beginner friendly**! Add your dataset, click "Run All", and
 - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
 
 ## Special Thanks
-A huge thank you to the Meta and Llama team for creating and releasing these models.
+A huge thank you to the Hugging Face team for creating and releasing these models.
 
-## Model Information
+## Model Summary
 
-The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
+SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
 
-**Model developer**: Meta
+The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
 
-**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
+The instruct model additionally supports tasks such as text rewriting, summarization, and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
 
-**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
+# SmolLM2
 
-**Llama 3.2 family of models** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
-
-**Model Release Date:** Sept 25, 2024
-
-**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
-
-**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
-
-Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/y45hIMNREW7w_XpHYB_0q.png)
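
Since this repo packages the model as GGUF, the new README's "lightweight enough to run on-device" claim can be tried directly with llama.cpp bindings. A minimal sketch, assuming the `llama-cpp-python` package; the quant filename pattern is an assumption, so check the repo's file list:

```python
# Minimal on-device inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename pattern below is an assumption; list the repo files to confirm.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/SmolLM2-1.7B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; picks a 4-bit quant if one exists
    n_ctx=2048,               # modest context window to keep memory low
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about SmolLM2."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```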
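The SFT-then-DPO recipe the README describes can be illustrated with the `trl` library. This is a minimal sketch, not the team's actual training code: the starting checkpoint, hyperparameters, and the `processing_class` keyword (which varies across trl versions) are assumptions; only the UltraFeedback dataset reference comes from the README.

```python
# Illustrative DPO stage on UltraFeedback preference pairs, sketched with trl.
# Not the official SmolLM2 recipe; checkpoint and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # stand-in for the post-SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference pairs (prompt, chosen, rejected) from the dataset cited in the README.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(output_dir="smollm2-dpo", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer` in older trl releases
)
trainer.train()
```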
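Finally, a small usage sketch for the tasks the instruct model is described as supporting (here summarization), using the Hub checkpoint the README links to; the prompt and generation settings are illustrative only:

```python
# Quick chat-template inference with the instruct checkpoint via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [{"role": "user", "content": "Summarize in one line: SmolLM2 comes in 135M, 360M, and 1.7B sizes and runs on-device."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```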