ayoubkirouane committed
Commit 7a07883
1 Parent(s): 596068e

Update README.md

Files changed (1)
  1. README.md +77 -0
README.md CHANGED
@@ -1,6 +1,44 @@
  ---
  library_name: peft
+ license: llama2
+ datasets:
+ - TuningAI/Cover_letter_v2
+ language:
+ - en
+ pipeline_tag: text-generation
  ---
+ ## Model Name: **Llama2_7B_Cover_letter_generator**
+ ## Description:
+ **Llama2_7B_Cover_letter_generator** is a custom language model fine-tuned to generate cover letters for a wide range of job positions.
+ It automates the creation of personalized cover letters tailored to specific job descriptions.
+ ## Base Model:
+ This model is based on Meta's "meta-llama/Llama-2-7b-hf" architecture, a capable foundation for generating human-like text.
+
+ ## Dataset:
+ The model was fine-tuned on a custom dataset of more than 200 unique examples.
+ The dataset combines manually written entries with examples generated by GPT-3.5, GPT-4, and Falcon 180B.
+
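+ Since the dataset is published on the Hugging Face Hub as `TuningAI/Cover_letter_v2`, it can be inspected with the `datasets` library. The sketch below is illustrative: it assumes a default `train` split and makes no assumption about column names, so it simply prints one record.
+
+ ```python
+ # Minimal sketch: download the fine-tuning data and inspect one example
+ from datasets import load_dataset
+
+ ds = load_dataset("TuningAI/Cover_letter_v2", split="train")  # "train" split is an assumption
+ print(ds)      # row count and column names
+ print(ds[0])   # first record, to see how instructions and cover letters are stored
+ ```
+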
+ ## Fine-tuning Techniques:
+ Fine-tuning was performed with QLoRA (Quantized LoRA), an extension of LoRA that quantizes the frozen base model for greater parameter and memory efficiency.
+ The base model was loaded with 4-bit NormalFloat (NF4) quantization and Double Quantization, so only the lightweight LoRA adapter weights were trained; a configuration sketch follows below.
+
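+ The snippet below is a minimal sketch of such a QLoRA setup using `peft` and `bitsandbytes`. The quantization flags mirror the description above; the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, since the exact training hyperparameters are not listed in this card.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+ from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
+
+ # 4-bit NF4 base weights with Double Quantization, as described above
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_compute_dtype=torch.float16)
+
+ base = AutoModelForCausalLM.from_pretrained(
+     "meta-llama/Llama-2-7b-hf",
+     quantization_config=bnb_config,
+     device_map="auto")
+ base = prepare_model_for_kbit_training(base)
+
+ # LoRA adapter on top of the frozen, quantized base model
+ # (rank, alpha, and target modules are assumptions, not the values used for this model)
+ lora_config = LoraConfig(
+     r=64,
+     lora_alpha=16,
+     lora_dropout=0.1,
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM")
+ model = get_peft_model(base, lora_config)
+ model.print_trainable_parameters()  # only the adapter weights are trainable
+ ```
+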
+ ## Use Cases:
+
+ * **Automating Cover Letter Creation:** Llama2_7B_Cover_letter_generator can be used to rapidly generate cover letters for a wide range of job openings, saving time and effort for job seekers.
+
+ ## Performance:
+
+ * Llama2_7B_Cover_letter_generator generates context-aware cover letters that stay coherent and relevant to the provided job description.
+ * It maintains a low perplexity, indicating that its output aligns well with the user input and desired context (a sketch of how perplexity can be measured follows this list).
+ * The 4-bit quantization keeps the model efficient without significantly compromising output quality.
+
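+ Perplexity here means the exponential of the average per-token negative log-likelihood under the model; lower values mean the model finds the text more predictable. The rough sketch below measures it on a single held-out example; it assumes `model` and `tokenizer` have already been loaded as in the "How to Get Started" section, and the sample text is purely illustrative.
+
+ ```python
+ import torch
+
+ def perplexity(model, tokenizer, text):
+     """Perplexity = exp(mean negative log-likelihood of the tokens in `text`)."""
+     enc = tokenizer(text, return_tensors="pt").to(model.device)
+     with torch.no_grad():
+         # Passing labels=input_ids makes the model return the mean cross-entropy loss
+         out = model(**enc, labels=enc["input_ids"])
+     return torch.exp(out.loss).item()
+
+ # Illustrative held-out example in the same prompt format as the inference code below
+ sample = "### Instruction\nWrite a cover letter.\n ###Input \n\nJunior data analyst role. ### Output: Dear Hiring Manager, ..."
+ print(perplexity(model, tokenizer, sample))  # lower is better
+ ```
+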
+ ## Limitations:
+
+ * While the model excels at generating cover letters, its output may occasionally need minor post-processing.
+ * It may not fully capture highly specific or niche job requirements, so some manual customization may be necessary for certain applications.
+ * Llama2_7B_Cover_letter_generator's performance may vary with the complexity and uniqueness of the input prompt.
+ * Users should review the generated content for potential biases to ensure inclusivity and fairness.
+
  ## Training procedure

@@ -18,3 +56,42 @@ The following `bitsandbytes` quantization config was used during training:

  - PEFT 0.4.0
+
+ ## How to Get Started with the Model
+
+ First, authenticate with the Hugging Face Hub; downloading the gated Llama 2 base weights requires an account that has accepted Meta's license:
+
+ ```
+ huggingface-cli login
+ ```
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline, logging
+ from peft import PeftModel
+
+ # 4-bit NF4 quantization config used to load the base model
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.float16,
+     bnb_4bit_use_double_quant=False)
+
+ # Load the quantized Llama 2 base model on GPU 0, then attach the fine-tuned LoRA adapter
+ model = AutoModelForCausalLM.from_pretrained(
+     "meta-llama/Llama-2-7b-hf",
+     quantization_config=bnb_config,
+     device_map={"": 0})
+ model.config.use_cache = False
+ model.config.pretraining_tp = 1
+ model = PeftModel.from_pretrained(model, "TuningAI/Llama2_7B_Cover_letter_generator")
+
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", trust_remote_code=True)
+ tokenizer.pad_token = tokenizer.eos_token
+ tokenizer.padding_side = "right"
+
+ logging.set_verbosity(logging.CRITICAL)
+ pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=512)
+
+ # `system_message` was not defined in the original snippet; this is an example instruction
+ system_message = "Write a personalized cover letter for the job description provided in the input"
+
+ while True:
+     input_text = input(">>> ")
+     prompt = f"### Instruction\n{system_message}.\n ###Input \n\n{input_text}. ### Output:"
+     result = pipe(prompt)
+     print(result[0]['generated_text'].replace(prompt, ''))
+ ```
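+
+ The `### Instruction / ###Input / ### Output:` template above is presumably the format the adapter was trained on, so it is safest to keep it unchanged, including the spacing; the final `.replace(prompt, '')` call simply strips the echoed prompt from the pipeline output.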