ZJU-Fangyin committed
Commit 19c153d
1 Parent(s): 5948b3f

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED
@@ -87,6 +87,26 @@ Instructions for running it can be found at https://github.com/zjunlp/Mol-Instru
  </details>


+ <h3> 📝 Demo</h3>
+
+ As illustrated in [our repository](https://github.com/zjunlp/Mol-Instructions/tree/main/demo), we provide an example of how to perform generation.
+
+ ```shell
+ >> python generate.py \
+     --CLI True \
+     --protein False \
+     --load_8bit \
+     --base_model $BASE_MODEL_PATH \
+     --lora_weights $FINETUNED_MODEL_PATH
+ ```
+
+ Please download [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main) to obtain the pre-trained weights of LLaMA-7B, then set `--base_model` to point to the location where the model weights are saved.
+
+ For the model fine-tuned on **molecule-oriented** instructions, set `$FINETUNED_MODEL_PATH` to `'zjunlp/llama-molinst-molecule-7b'`.
+
+
  <h3> 🚨 Limitations</h3>
  The current state of the model, obtained via instruction tuning, is a preliminary demonstration. Its capacity to handle real-world, production-grade tasks remains limited.
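
As a usage note for the demo added above: the command leaves `$BASE_MODEL_PATH` and `$FINETUNED_MODEL_PATH` for the user to define. The sketch below shows one way to wire them together; it is a minimal, illustrative sketch only — the local weights directory is a placeholder, and it assumes `generate.py` and its dependencies from the Mol-Instructions repository are already set up.

```shell
# Minimal sketch of a full invocation; paths are placeholders, adjust to your setup.
# Assumes generate.py and its dependencies from
# https://github.com/zjunlp/Mol-Instructions/tree/main/demo are installed locally.

# Placeholder: directory containing the downloaded llama-7b-hf weights
export BASE_MODEL_PATH=/path/to/llama-7b-hf

# LoRA weights of the molecule-oriented model on the Hugging Face Hub
export FINETUNED_MODEL_PATH='zjunlp/llama-molinst-molecule-7b'

python generate.py \
    --CLI True \
    --protein False \
    --load_8bit \
    --base_model $BASE_MODEL_PATH \
    --lora_weights $FINETUNED_MODEL_PATH
```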