sasakipeter committed
Commit 2118cda
1 Parent(s): 7cdcb58

Update Usage

Files changed (1): README.md (+9 −8)
README.md CHANGED

@@ -33,9 +33,10 @@ The dataset includes Japanese instruction-response pairs and has been tailored f
 
 ## Usage
 
-1. Install Required Libraries
+
+### 1. Install Required Libraries
 
-```
+```python
 !pip install -U bitsandbytes
 !pip install -U transformers
 !pip install -U accelerate
@@ -43,9 +44,9 @@ The dataset includes Japanese instruction-response pairs and has been tailored f
 !pip install -U peft
 ```
 
-2. Load the Model and Libraries
+### 2. Load the Model and Libraries
 
-```
+```python
 from transformers import (
     AutoModelForCausalLM,
     AutoTokenizer,
@@ -69,9 +70,9 @@ bnb_config = BitsAndBytesConfig(
 )
 ```
 
-3. Load the Base Model and LoRA Adapter
+### 3. Load the Base Model and LoRA Adapter
 
-```
+```python
 # Load base model with 4-bit quantization
 model = AutoModelForCausalLM.from_pretrained(
     base_model_id,
@@ -91,9 +92,9 @@ tokenizer = AutoTokenizer.from_pretrained(
 model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)
 ```
 
-4. Perform Inference
+### 4. Perform Inference
 
-```
+```python
 # Example input prompt
 input_text = """次の文章を要約してください。
 