---
base_model: CohereForAI/c4ai-command-r7b-12-2024
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
tags:
- mlx
inference: false
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
  acknowledge that the information you provide will be collected, used, and shared
  in accordance with Cohere’s [Privacy Policy](https://cohere.com/privacy). You’ll
  receive email updates about C4AI and Cohere research, events, products and services.
  You can unsubscribe at any time.
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: country
  I agree to use this model for non-commercial use ONLY: checkbox
---

# mlx-community/c4ai-command-r7b-12-2024-4bit

The model [mlx-community/c4ai-command-r7b-12-2024-4bit](https://huggingface.co/mlx-community/c4ai-command-r7b-12-2024-4bit) was
converted to MLX format from [CohereForAI/c4ai-command-r7b-12-2024](https://huggingface.co/CohereForAI/c4ai-command-r7b-12-2024)
using mlx-lm version **0.20.4**.
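
A similar conversion can be reproduced locally. The sketch below is an assumption, not the exact command used for this repo: it uses mlx-lm's `convert` helper with `quantize=True`, which quantizes to 4 bits by default and writes the converted weights to `./mlx_model`.

```python
from mlx_lm import convert

# Quantize the original checkpoint to 4-bit MLX weights (assumed defaults:
# q_bits=4, group size 64); the output is written to ./mlx_model.
convert("CohereForAI/c4ai-command-r7b-12-2024", quantize=True)
```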

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (on first use) and load the 4-bit weights and tokenizer.
model, tokenizer = load("mlx-community/c4ai-command-r7b-12-2024-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when the tokenizer has one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
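
For longer outputs it can help to cap generation length and stream tokens as they arrive. A minimal sketch, assuming mlx-lm 0.20's `stream_generate`, which yields response chunks carrying a `.text` field, and its `max_tokens` keyword:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/c4ai-command-r7b-12-2024-4bit")

messages = [{"role": "user", "content": "Write a haiku about the sea."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Print each chunk as it is generated instead of waiting for the full reply.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```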