Text Generation
Transformers
PyTorch
RefinedWebModel
custom_code
text-generation-inference
lifeofcoding committed on
Commit 828dab8 · 1 Parent(s): cf12c59

Update README.md

Files changed (1)
  1. README.md +28 -4
README.md CHANGED
@@ -6,15 +6,39 @@ datasets:
---
# Mastermax 7B

- A open source large langauge model for chat
+ An open source large language model based on Falcon-7b

## Model Details

This was based on Falcon 7B and trained on additional new datasets.

- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
+ ### Use Model
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import transformers
+ import torch
+ model = "lifeofcoding/mastermax-7b"
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+     device_map="auto",
+ )
+ sequences = pipeline(
+     "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
+     max_length=200,
+     do_sample=True,
+     top_k=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+ ```
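The snippet added by this commit imports `AutoModelForCausalLM` but only exercises the `transformers` pipeline API. Below is a minimal sketch, not part of the commit, of loading the same checkpoint directly and calling `generate()`. It assumes the `lifeofcoding/mastermax-7b` repo id from the snippet above and mirrors its settings (`trust_remote_code=True` for the custom RefinedWebModel code, `device_map="auto"` via `accelerate`).

```python
# Sketch only: direct use of AutoModelForCausalLM, mirroring the pipeline example above.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "lifeofcoding/mastermax-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # model repo ships custom RefinedWebModel code
    device_map="auto",        # requires `accelerate` to be installed
)

prompt = "Daniel: Hello, Girafatron!\nGirafatron:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_k=10,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The sampling parameters follow the pipeline call; `max_new_tokens` is used instead of `max_length` so the 200-token budget applies to the generated continuation rather than to prompt plus continuation.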