DanielHesslow committed
Commit 6cbe3ea · Parent: 314beba

Update README.md
Files changed (1): README.md (+2, −4)
README.md CHANGED
@@ -20,14 +20,13 @@ Model | #Params | d_model | layers | lm loss uniref-100
 [XLarge](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
 
 
-
 # Usage
 
 Instantiate a model like so:
 
 from transformers import AutoModel, AutoModelForCausalLM
-model = AutoModelForCausalLM.from_pretrained("Seledorn/RITA_l, trust_remote_code=True")
-tokenizer = AutoTokenizer.from_pretrained("Seledorn/RITA_l")
+model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_l, trust_remote_code=True")
+tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_l")
 
 for generation use we support pipelines:
 
@@ -36,4 +35,3 @@ for generation use we support pipelines:
 sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=2, eos_token_id=2)
 for seq in sequences:
     print(f"seq: {seq['generated_text'].replace(' ', '')}")
-
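Note that even after this commit the snippet still misplaces a closing quote (`"lightonai/RITA_l, trust_remote_code=True"` puts the keyword argument inside the repo-id string) and calls `AutoTokenizer` without importing it. The generation loop then strips the spaces the tokenizer inserts between residues. A minimal, model-free sketch of that post-processing step, using a hypothetical stand-in for the list of `{"generated_text": ...}` dicts that a transformers text-generation pipeline returns (running RITA itself would require downloading the checkpoint):

```python
# Hypothetical pipeline output: transformers text-generation pipelines return
# a list of {"generated_text": ...} dicts; the values here are made-up
# placeholders, not real RITA generations.
sequences = [
    {"generated_text": "M A B V L K"},
    {"generated_text": "M A B G G S"},
]

def clean(seq: dict) -> str:
    # Remove the inter-residue spaces to recover a plain amino-acid string,
    # mirroring the README's .replace(' ', '') step.
    return seq["generated_text"].replace(" ", "")

for seq in sequences:
    print(f"seq: {clean(seq)}")
```

The same `clean` helper works on real pipeline output, since it only touches the `generated_text` field.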