NoaiGPT committed on
Commit 94c68d3
1 Parent(s): 8e120b9
Files changed (1)
  1. README.md +3 -12
README.md CHANGED
@@ -34,7 +34,7 @@ widget:

  This repository contains a fine-tuned text-rewriting model based on the T5-Base with 223M parameters.

- Developed by: https://exnrt.com
+

  ## Key Features:

@@ -56,8 +56,8 @@ T5 model expects a task related prefix: since it is a paraphrasing task, we will
  from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

  device = "cuda"
- tokenizer = AutoTokenizer.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='your_token')
- model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='your_token').to(device)
+ tokenizer = AutoTokenizer.from_pretrained("NoaiGPT/777", token='your_token')
+ model = AutoModelForSeq2SeqLM.from_pretrained("NoaiGPT/777", token='your_token').to(device)

  def generate_title(text):
      input_ids = tokenizer(f'paraphraser: {text}', return_tensors="pt", padding="longest", truncation=True, max_length=64).input_ids.to(device)
@@ -85,12 +85,3 @@ generate_title(text)
   'Using transfer learning to use prior model training, fine-tuning can reduce the amount of expensive computing power and labeled data required for large models that are suitable in niche usage cases or businesses.']
  ```

- **Disclaimer:**
-
- * Limited Use: It grants a non-exclusive, non-transferable license to use this model, same as Llama-3. This means you can't freely share it with others or sell the model itself.
- * Commercial Use Allowed: You can use the model for commercial purposes, but under the terms of the license agreement.
- * Attribution Required: You need to abide by the agreement's terms regarding attribution. It is essential to use the paraphrased text responsibly and ethically, with proper attribution of the original source.
-
- **Further Development:**
-
- (Mention any ongoing development or areas for future improvement in Discussions.)
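
For context, here is a minimal end-to-end sketch of how the renamed checkpoint ("NoaiGPT/777") is used after this change. The body of `generate_title` beyond the tokenization step is not visible in this diff, so the `model.generate` arguments below (`num_beams`, `num_return_sequences`, `max_length`) are illustrative assumptions, not the README's exact values.

```python
# Usage sketch for the updated repo id ("NoaiGPT/777").
# The generate() arguments are assumptions; the README's exact
# generation parameters are not shown in this diff.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("NoaiGPT/777", token="your_token")
model = AutoModelForSeq2SeqLM.from_pretrained("NoaiGPT/777", token="your_token").to(device)

def generate_title(text):
    # Prepend the "paraphraser:" task prefix expected by the fine-tuned T5 model.
    input_ids = tokenizer(
        f"paraphraser: {text}",
        return_tensors="pt",
        padding="longest",
        truncation=True,
        max_length=64,
    ).input_ids.to(device)
    # Beam search returning several paraphrases, matching the list-style
    # output shown in the README example; values here are illustrative.
    outputs = model.generate(
        input_ids,
        num_beams=4,
        num_return_sequences=3,
        max_length=64,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(generate_title("Fine-tuning adapts a pretrained model to a niche task with less data."))
```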
 