legolasyiu committed on
Commit f732da4 · verified · 1 Parent(s): b6754a9

Update README.md

Files changed (1): README.md +73 -1
README.md CHANGED
@@ -9,7 +9,79 @@ tags:
 - unsloth
 - llama
 - trl
+datasets:
+- sahil2801/CodeAlpaca-20k
 ---
+# Llama Agent 3B coder
+Fine-tuned on an agent dataset and on the CodeAlpaca-20k dataset for code-agent instruction following.
+
+## Original Model card
+## Model Information
+
+The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.
+
+**Model Developer:** Meta
+
+**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
+
+| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
+| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
+| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
+| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
+
+**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
+
+**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
+
+**Model Release Date:** Sept 25, 2024
+
+**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
+
+**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
+
+**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
+
+## Intended Use
+
+**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.
+
+**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and the Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
+
+## How to use
+
+This repository contains two versions of Llama-3.2-3B-Instruct, for use with `transformers` and with the original `llama` codebase.
+
+### Use with transformers
+
+Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
+
+Make sure to update your transformers installation via `pip install --upgrade transformers`.
+
+```python
+import torch
+from transformers import pipeline
+
+model_id = "meta-llama/Llama-3.2-3B-Instruct"
+
+# Build a bfloat16 text-generation pipeline, placing weights automatically.
+pipe = pipeline(
+    "text-generation",
+    model=model_id,
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+)
+messages = [
+    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+    {"role": "user", "content": "Who are you?"},
+]
+outputs = pipe(
+    messages,
+    max_new_tokens=256,
+)
+# The pipeline returns the full conversation; the last entry is the assistant's reply.
+print(outputs[0]["generated_text"][-1])
+```
+
+Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantisation, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
+
 
 # Uploaded model
 
@@ -19,4 +91,4 @@ tags:
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
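
The `pipeline` snippet in the updated README hands the `messages` list to the checkpoint's chat template before generation. As a rough, non-authoritative sketch of the prompt layout that Llama 3-family templates produce (the special-token names here are assumptions for illustration; in practice always use `tokenizer.apply_chat_template`, which ships with the tokenizer):

```python
# Illustration only: approximate Llama 3-family chat prompt layout.
# The token names (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>)
# are assumptions for this sketch, not the authoritative template.
def render_chat(messages, add_generation_prompt=True):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    if add_generation_prompt:
        # Leave an open assistant header so the model continues from here.
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
print(render_chat(messages))
```

This only shows why the README's `messages` list needs explicit `role`/`content` keys; the real template applied by `transformers` is the source of truth.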