puremood committed
Commit 0397fd1
Parent(s): e984492

Update README.md

Files changed (1): README.md (+6 -6)

README.md CHANGED
@@ -1,5 +1,6 @@
 ---
-base_model: togethercomputer/Llama-3-70b-chat-hf
+base_model:
+- meta-llama/Meta-Llama-3-70B-Instruct
 library_name: peft
 ---
 
@@ -9,7 +10,7 @@ MARTZAI is a LoRA fine-tuned adapter for **LLaMA 70B**, trained on Chris Martz's
 
 ## Model Details
 
-- **Base model:** [togethercomputer/Llama-3-70b-chat-hf](https://huggingface.co/togethercomputer/Llama-3-70b-chat-hf)
+- **Base model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
 - **Method:** LoRA (Low-Rank Adaptation)
 - **Framework:** PEFT
 - **Language:** English
@@ -22,13 +23,13 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 from peft import PeftModel
 
 # Load base model
-base_model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-3-70b-chat-hf")
+base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
 
 # Load LoRA adapter
 lora_model = PeftModel.from_pretrained(base_model, "your_hf_username/llama70b-lora-adapter")
 
 # Load tokenizer
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-3-70b-chat-hf")
+tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
 
 # Generate text
 input_text = "What are Chris Martz's views on inflation?"
@@ -40,5 +41,4 @@ print(tokenizer.decode(outputs[0]))
 Usage: Ideal for tasks requiring Chris Martz’s tone or expertise.
 Limitations: This adapter inherits biases and constraints from the base model.
 
-Developed by sw4geth. Contact via Hugging Face for questions or feedback.
-
+Developed by sw4geth. Contact via Hugging Face for questions or feedback.
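
One note on the updated example: meta-llama/Meta-Llama-3-70B-Instruct is a chat-tuned model, so a raw string like the README's `input_text` is normally wrapped in the Llama 3 chat template first, typically via `tokenizer.apply_chat_template(...)`. A minimal sketch of that template assembled by hand, using the special tokens from Meta's Llama 3 prompt format (the helper name `build_llama3_prompt` is illustrative, not part of any library):

```python
# Sketch: wrap a single user turn in the Llama 3 Instruct chat format.
# In practice, tokenizer.apply_chat_template(messages,
# add_generation_prompt=True, tokenize=False) produces this string;
# it is spelled out here only to show what the template looks like.

def build_llama3_prompt(user_message: str) -> str:
    """Illustrative helper: one user turn plus an assistant header."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("What are Chris Martz's views on inflation?")
print(prompt)
```

Passing the templated string to `tokenizer(...)` and `lora_model.generate(...)` in place of the raw `input_text` keeps inference closer to how the Instruct base model was trained.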