nroggendorff committed
Commit e93e4c9 · verified · 1 Parent(s): f67e0cd

Update README.md

Files changed (1):
  1. README.md +1 -45
README.md CHANGED
@@ -6,48 +6,4 @@ colorTo: gray
  header: mini
  sdk: static
  pinned: false
- ---
-
- ## Usage
-
- You can load models using the Hugging Face Transformers library:
-
- ```python
- from transformers import pipeline
-
- pipe = pipeline("text-generation", model="nroggendorff/mayo")
-
- question = "What color is the sky?"
- conv = [{"role": "user", "content": question}]
-
- response = pipe(conv, max_new_tokens=32)[0]['generated_text'][-1]['content']
- print(response)
- ```
-
- To use models with quantization:
-
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
- import torch
-
- bnb_config = BitsAndBytesConfig(
-     load_in_4bit=True,
-     bnb_4bit_use_double_quant=True,
-     bnb_4bit_quant_type="nf4",
-     bnb_4bit_compute_dtype=torch.bfloat16
- )
-
- model_id = "nroggendorff/mayo"
-
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
-
- question = "What color is the sky?"
- prompt = tokenizer.apply_chat_template([{"role": "user", "content": question}], tokenize=False)
- inputs = tokenizer(prompt, return_tensors="pt")
-
- outputs = model.generate(**inputs, max_new_tokens=32)
-
- generated_text = tokenizer.batch_decode(outputs)[0]
- print(generated_text)
- ```
+ ---