rwmasood committed
Commit e02397e · verified · 1 Parent(s): 4761c50

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -20,7 +20,7 @@ base_model:
  * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
  * **License**: This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format
  * **Where to send comments**: Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions)
- * **Contact**: For questions and comments about the model, please email [contact@upstage.ai](mailto:contact@upstage.ai)
+ * **Contact**: For questions and comments about the model, please email [contact@empirischtech.at](mailto:contact@empirischtech.at)

  ## Training

@@ -48,12 +48,12 @@ model = AutoModelForCausalLM.from_pretrained(
  torch_dtype=torch.float16
  )

- prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
+ prompt = "### User:\nEmma feels perfectly fine, yet she still has an appointment at the hospital. What might be the reasons?\n\n### Assistant:\n"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  del inputs["token_type_ids"]
  streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

- output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
+ output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=1024)
  output_text = tokenizer.decode(output[0], skip_special_tokens=True)
  ```
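For context, a self-contained version of the updated snippet is sketched below. It is not part of this commit: the repository id `upstage/llama-30b-instruct-2048` is assumed from the discussions link above, and `device_map="auto"` (which requires `accelerate`) is added for convenience.

```python
# Minimal sketch of the updated README usage; the repo id below is assumed
# from the discussions URL and is not stated in the diff itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "upstage/llama-30b-instruct-2048"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",          # requires `accelerate`; added for convenience
    torch_dtype=torch.float16,
)

prompt = "### User:\nEmma feels perfectly fine, yet she still has an appointment at the hospital. What might be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Some LLaMA tokenizers return token_type_ids, which generate() does not accept.
inputs.pop("token_type_ids", None)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Bounded generation, as changed in this commit (1024 new tokens instead of float('inf')).
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=1024)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

Replacing `max_new_tokens=float('inf')` with a finite cap also avoids passing a non-integer value to `generate()`, which expects an integer limit.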