---
language:
- en
tags:
- Advertising
---

## llama-2-7b-Ads

### Model Overview

The "PeterBrendan/llama-2-7b-Ads" model is a fine-tuned version of "meta-llama/Llama-2-7b-chat-hf". The base model was trained on a vast corpus of text, enabling it to generate coherent and contextually relevant responses for chat-based applications. "PeterBrendan/llama-2-7b-Ads" was fine-tuned on the "PeterBrendan/Ads_Creative_Ad_Copy_Programmatic" dataset.

### Dataset Overview

The "PeterBrendan/Ads_Creative_Ad_Copy_Programmatic" dataset used for fine-tuning contains 7,097 samples of online programmatic ad creatives, along with their respective ad sizes. The dataset covers 8 unique ad sizes:

1. (300, 250)
2. (728, 90)
3. (970, 250)
4. (300, 600)
5. (160, 600)
6. (970, 90)
7. (336, 280)
8. (320, 50)

This dataset is a random sample from Project300x250.com's complete creative data set. Its primary application is training and evaluating natural language processing models for advertising creatives.
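To work with the ad sizes programmatically, the eight dimensions above can be kept as (width, height) tuples. This is only an illustrative sketch: it mirrors the list above and the "300x250" notation used later in the example prompts, not a documented dataset schema.

~~~python
# The eight (width, height) ad sizes listed above.
AD_SIZES = [
    (300, 250), (728, 90), (970, 250), (300, 600),
    (160, 600), (970, 90), (336, 280), (320, 50),
]

def size_to_prompt_fragment(size):
    """Format a (width, height) tuple the way the example prompts do, e.g. '300x250'."""
    width, height = size
    return f"{width}x{height}"

# Wide banner-style sizes vs. tall skyscraper-style sizes.
wide = [s for s in AD_SIZES if s[0] > s[1]]
tall = [s for s in AD_SIZES if s[1] > s[0]]
~~~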

### Use Cases

The "PeterBrendan/llama-2-7b-Ads" model can support a range of natural language processing tasks related to advertising creatives. Potential use cases include:

1. **Ad Creative Generation**: The model can generate ad copy from different prompts, enabling advertisers to create compelling ad creatives automatically.

2. **Personalization**: By including user-specific data in a prompt, such as demographics or preferences, the model can generate personalized ad copy tailored to different ad sizes for targeted advertising.

#### Example Prompts

**Example Prompt 1:**

~~~
Write me an online ad for Old Spice for a 300x250 creative
~~~

**Output:**

~~~
OLD SPICE
The Smell of a Man
GET YOUR SMELL ON
SHOP NOW
~~~

**Example Prompt 2:**

~~~
Write me an online ad for Nike Basketball Shoes for a 300x250 creative
~~~

**Output:**

~~~
Nike
Basketball Shoes
Shop Now
Nike
~~~
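The example prompts above can be built programmatically. One caveat: the card shows plain instruction text, while the base model "meta-llama/Llama-2-7b-chat-hf" normally expects Llama-2's `[INST] ... [/INST]` chat template. Whether the fine-tuned model expects that wrapping is an assumption, so this sketch supports both forms:

~~~python
def build_ad_prompt(brand: str, size: str, use_chat_template: bool = False) -> str:
    """Build an ad-copy prompt in the style of the example prompts above.

    Wrapping the instruction in Llama-2-chat's [INST] ... [/INST] template
    is an assumption; the model card itself shows only plain-text prompts.
    """
    instruction = f"Write me an online ad for {brand} for a {size} creative"
    if use_chat_template:
        return f"[INST] {instruction} [/INST]"
    return instruction
~~~

`build_ad_prompt("Old Spice", "300x250")` reproduces Example Prompt 1 exactly.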

### Performance and Limitations

As a fine-tuned version of "meta-llama/Llama-2-7b-chat-hf", this model inherits the base model's performance characteristics and limitations. The quality of generated responses depends on the complexity and diversity of the data seen during fine-tuning.

**Performance:** The model generally generates coherent ad copy for the supported ad sizes, though actual performance varies with the complexity and creativity required by a given task.

**Limitations:**

1. **Domain-Specific Bias**: The model's responses may be biased toward the content of the "PeterBrendan/Ads_Creative_Ad_Copy_Programmatic" dataset, which focuses on advertising creatives.

2. **Out-of-Domain Queries**: The model may not perform well on queries or inputs unrelated to advertising creatives or the specified ad sizes.

3. **Limited Generalization**: Although fine-tuned, the model's generalization is bounded by its training data; extreme or out-of-distribution inputs may produce inaccurate or nonsensical outputs.

### How to Use

Use a pipeline as a high-level helper:

~~~python
from transformers import pipeline

pipe = pipeline("text-generation", model="PeterBrendan/llama-2-7b-Ads")
~~~

Or load the tokenizer and model directly:

~~~python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PeterBrendan/llama-2-7b-Ads")
model = AutoModelForCausalLM.from_pretrained("PeterBrendan/llama-2-7b-Ads")
~~~
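By default a `text-generation` pipeline returns the prompt followed by the generated continuation (`return_full_text=True`), so the echoed prompt usually needs to be stripped before using the ad copy. A minimal sketch — the generation call is commented out because it requires downloading the full model, and the sampling parameters are illustrative only:

~~~python
prompt = "Write me an online ad for Old Spice for a 300x250 creative"

# Requires the full model download; shown for illustration only:
# outputs = pipe(prompt, max_new_tokens=64, do_sample=True)
# generated = outputs[0]["generated_text"]

def strip_prompt(generated_text: str, prompt: str) -> str:
    """Remove the echoed prompt from the pipeline output, if present."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text
~~~

Alternatively, pass `return_full_text=False` to the pipeline call to receive only the continuation.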

### Acknowledgments

The "PeterBrendan/llama-2-7b-Ads" model was fine-tuned using the Hugging Face Transformers library and relies on the "meta-llama/Llama-2-7b-chat-hf" base model. We extend our gratitude to the creators of the base model for their contributions.

### Disclaimer

This model card provides an overview of the "PeterBrendan/llama-2-7b-Ads" model and its use cases. Exercise caution and apply human review when deploying any AI model for critical applications such as advertising. As with any AI system, the model's outputs should be thoroughly reviewed, especially in real-world scenarios, to ensure alignment with business objectives and ethical considerations.