RudranshAgnihotri committed · Commit 38d686d · Parent(s): 5df633a

Update README.md

README.md CHANGED
@@ -16,76 +16,39 @@ The adapter has been trained using the Amazon Sentiment Review dataset, which in
 
 The Amazon Sentiment Review dataset was chosen for its size and its realistic representation of customer feedback. It serves as an excellent basis for training models to perform sentiment analysis in real-world scenarios.
 
-model-index:
-- name: LLAMA 7B Sentiment Analysis Adapter
-  results:
-  - task:
-      name: Sentiment Analysis
-      type: text-classification
-    dataset:
-      name: Amazon Sentiment Review dataset
-      type: amazon_reviews
-model-metadata:
-  license: apache-2.0
-  library_name: transformers
-  tags: ["text-classification", "sentiment-analysis", "English"]
-  languages: ["en"]
-widget:
-- text: "I love using FuturixAI for my daily tasks!"
-
-intended-use:
-  primary-uses:
-  - This model is intended for sentiment analysis on English language text.
-  primary-users:
-  - Researchers
-  - Social media monitoring tools
-  - Customer feedback analysis systems
-
-training-data:
-  training-data-source: Amazon Sentiment Review dataset
-
-quantitative-analyses:
-  use-cases-limitations:
-  - The model may perform poorly on texts that contain a lot of slang or are in a different language than it was trained on.
-
-ethical-considerations:
-  risks-and-mitigations:
-  - There is a risk of the model reinforcing or creating biases based on the training data. Users should be aware of this and consider additional bias mitigation strategies when using the model.
-
-model-architecture:
-  architecture: LLAMA 7B with LORA adaptation
-  library: PeftModel
-
-how-to-use:
-  installation:
-  - pip install transformers peft
-  code-examples:
-  - |
-    ```python
-    import transformers
-    from peft import PeftModel
-    model_name = "meta-llama/Llama-2-7b"  # you can use the Vicuna 7B model as well
-    peft_model_id = "Futurix-AI/LLAMA_7B_Sentiment_Analysis_Amazon_Review_Dataset"
-
-    tokenizer_t5 = transformers.AutoTokenizer.from_pretrained(model_name)
-    model_t5 = transformers.AutoModelForCausalLM.from_pretrained(model_name)
-    model_t5 = PeftModel.from_pretrained(model_t5, peft_model_id)
-
-    prompt = """
-    Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
-    ###Instruction:
-    Detect the sentiment of the tweet.
-    ###Input:
-    FuturixAI embodies the spirit of innovation, with a resolve to push the boundaries of what's possible through science and technology.
-    ###Response:
-    """
-
-    inputs = tokenizer_t5(prompt, return_tensors="pt")
-    for k, v in inputs.items():
-        inputs[k] = v
-    outputs = model_t5.generate(**inputs, max_length=256, do_sample=True)
-    text = tokenizer_t5.batch_decode(outputs, skip_special_tokens=True)[0]
-    print(text)
-    ```
-
-
 
+```python
+import transformers
+from peft import PeftModel
+
+# Model and tokenizer names
+model_name = "lmsys/vicuna-7b-v1.5"
+peft_model_id = "rudransh2004/FuturixAI-AmazonSentiment-LLAMA7B-LORA"
+
+# Initialize the tokenizer and model
+tokenizer_t5 = transformers.AutoTokenizer.from_pretrained(model_name)
+model_t5 = transformers.AutoModelForCausalLM.from_pretrained(model_name)
+model_t5 = PeftModel.from_pretrained(model_t5, peft_model_id)
+
+# Prompt for sentiment detection
+prompt = """
+Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+###Instruction:
+Detect the sentiment of the tweet.
+###Input:
+FuturixAI embodies the spirit of innovation, with a resolve to push the boundaries of what's possible through science and technology.
+###Response:
+
+"""
+
+# Tokenize the prompt and prepare inputs
+inputs = tokenizer_t5(prompt, return_tensors="pt")
+for k, v in inputs.items():
+    inputs[k] = v
+
+# Generate a response using the model
+outputs = model_t5.generate(**inputs, max_length=256, do_sample=True)
+
+# Decode and print the response
+text = tokenizer_t5.batch_decode(outputs, skip_special_tokens=True)[0]
+print(text)
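
Note that `generate` returns the prompt tokens followed by the completion, so the decoded `text` in the committed example contains the whole Alpaca-style template; the sentiment label appears only after the `###Response:` marker. A minimal post-processing sketch (the `extract_response` helper is illustrative and not part of the repository):

```python
def extract_response(generated: str, marker: str = "###Response:") -> str:
    """Return only the text the model produced after the last response marker."""
    idx = generated.rfind(marker)
    if idx == -1:
        # Marker missing (e.g. trimmed by max_length): fall back to the full text.
        return generated.strip()
    return generated[idx + len(marker):].strip()

# Simulated decoder output: the echoed prompt plus the model's completion.
sample = (
    "###Instruction:\nDetect the sentiment of the tweet.\n"
    "###Input:\nFuturixAI embodies the spirit of innovation.\n"
    "###Response:\nPositive"
)
print(extract_response(sample))  # → Positive
```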