[切换到中文版本](README_zh.md)
[Switch to English Version](README.md)
# Noname AI
Projects related to Noname AI, an AI model that generates Noname skill code from a description of the desired skill effect.
[modelscope Online Experience](https://www.modelscope.cn/studios/huskyhong/nonameai)
Due to limited computing power, the online experience version is only a lightweight CPU version with limited precision. If needed, please choose the GPU version or full version for inference.
Fine-tuned from Qwen.
## Configuration Requirements
For the best experience, please try to meet the following requirements:
- A computer (required)
- At least 20 GB of free hard disk space (required)
- Full non-quantized version / GPU lazy one-click package: uses GPU inference on computers with an NVIDIA graphics card; requires (half of GPU memory) + physical memory >= 16 GB (physical memory does not include virtual memory)
- Full non-quantized version / CPU lazy one-click package: uses CPU inference on computers without a graphics card; memory (including virtual memory) should be as close as possible to >= 32 GB
- Lightweight version / GPU lightweight lazy one-click package: uses GPU inference on computers with an NVIDIA graphics card; requires (half of GPU memory) + physical memory >= 4 GB (physical memory does not include virtual memory)
- Lightweight version / CPU lightweight lazy one-click package: uses CPU inference on computers without a graphics card; memory (including virtual memory) should be as close as possible to >= 12 GB
## Usage
### Full Model Method
1. Install Python and a matching Python interpreter.
   - Note: Compatible Python versions are 3.8, 3.9, 3.10, and 3.11. Please do not install a version newer or older than these.
2. Enter the following command in the terminal to install the required environment:
```bash
pip install -r requirements.txt
```
3. Run the program with the following Python code. The model is downloaded automatically; the code below defaults to the v2.5 full version.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("huskyhong/noname-ai-v2_5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("huskyhong/noname-ai-v2_5", device_map="auto", trust_remote_code=True).eval() # Load the model using GPU
# model = AutoModelForCausalLM.from_pretrained("huskyhong/noname-ai-v2_5", device_map="cpu", trust_remote_code=True).eval() # Load the model using CPU
model.generation_config = GenerationConfig.from_pretrained("huskyhong/noname-ai-v2_5", trust_remote_code=True) # You can specify different generation lengths, top_p, and other related hyperparameters
# For the first generation model, replace "huskyhong/noname-ai-v2_5" with "huskyhong/noname-ai-v1". For lightweight version v2.5 model, replace "huskyhong/noname-ai-v2_5" with "huskyhong/noname-ai-v2_5-light"
prompt = "请帮我编写一个技能,技能效果如下:" + input("请输入技能效果:")
response, history = model.chat(tokenizer, prompt, history = [])
print(response)
prompt = "请帮我编写一张卡牌,卡牌效果如下:" + input("请输入卡牌效果:")
response, history = model.chat(tokenizer, prompt, history = [])
print(response)
```
Alternatively, you can use Hugging Face's pipeline for inference.
```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, GenerationConfig
generator = pipeline(
    "text-generation",
    model="huskyhong/noname-ai-v2_5",
    tokenizer="huskyhong/noname-ai-v2_5",
    device=0,  # Choose GPU device. If you want to use CPU, you can set device=-1
    trust_remote_code=True
)
prompt = "请帮我编写一个技能,技能效果如下:" + input("请输入技能效果:")
response = generator(prompt, max_length=50, top_p=0.95) # You can adjust parameters such as generation length, top_p as needed
print(response[0]['generated_text'])
prompt = "请帮我编写一张卡牌,卡牌效果如下:" + input("请输入卡牌效果:")
response = generator(prompt, max_length=50, top_p=0.95) # You can adjust parameters such as generation length, top_p as needed
print(response[0]['generated_text'])
```
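Note that a `text-generation` pipeline usually echoes the prompt back at the start of `generated_text`. A small helper like the following can keep only the newly generated part (this helper is our own sketch, not part of the project):

```python
def strip_prompt(generated_text: str, prompt: str) -> str:
    """Return only the continuation, dropping the echoed prompt if present."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text

# Example: pass response[0]['generated_text'] from the pipeline above
# together with the original prompt string.
```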
4. If automatic downloading fails, you can download the model files manually and replace `"huskyhong/noname-ai-v2_5"` in the code with the local path of the downloaded model.
Download links for the second-generation model:
- [v2.5 Hugging Face address (full version)](https://huggingface.co/huskyhong/noname-ai-v2_5)
- [v2.5 Hugging Face address (lightweight version)](https://huggingface.co/huskyhong/noname-ai-v2_5-light)
- [Baidu Netdisk address](https://pan.baidu.com/s/1m9RfGqnuQbRYROE_UzuG-Q?pwd=6666) Baidu Netdisk extraction code: 6666
Download links for the first-generation model:
- [Hugging Face address](https://huggingface.co/huskyhong/noname-ai-v1)
- [Baidu Netdisk address](https://pan.baidu.com/s/1Ox471XuHF_gJbcPPnSZe7g?pwd=6666) Baidu Netdisk extraction code: 6666
Remember to choose whether to load the model with the GPU or the CPU, and to replace the model name in the code with your actual model path if you downloaded the model manually.
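When switching between an automatically downloaded model and a manual download, a tiny helper can pick the local copy when it exists and otherwise fall back to the Hub name (the function name and paths below are our own illustration, not project code):

```python
import os

def resolve_model_path(local_dir: str, hub_id: str) -> str:
    """Prefer a manually downloaded local model directory; else use the Hub ID."""
    return local_dir if os.path.isdir(local_dir) else hub_id

# model_path = resolve_model_path("./noname-ai-v2_5", "huskyhong/noname-ai-v2_5")
# ...then pass model_path to AutoModelForCausalLM.from_pretrained(...)
```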
## Lazy One-Click Package
- One-click installation, no worries.
- Please choose the appropriate lazy one-click package according to your own configuration.
- [Lazy One-Click Package Baidu Netdisk Download Address (Updated to v2.5)](https://pan.baidu.com/s/1zIcRZtQv5oIdu7_abie9Vw?pwd=6666) Baidu Netdisk extraction code: 6666
- [Lazy One-Click Package 123 Netdisk Download Address (Updated to v2.5)](https://www.123pan.com/s/lOcnjv-pnOG3.html) 123 Netdisk extraction code: 6666
- Please check the release date of the lazy one-click package to make sure you are using the latest version!
- Lazy package related videos
- [Comparison of Effects of Lazy Package v2.5](https://www.bilibili.com/video/BV1KKY4e8EaC/)
## Web Version/Server Deployment
- Install Python
- Install dependencies
```bash
pip install -r requirements.txt
```
- Install Streamlit
```bash
pip install streamlit
```
- Open port 8501 on the server (another port can be used instead; it must match the port webdemo.py is served on)
- Run webdemo
```bash
streamlit run webdemo.py
```
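If you want a port other than 8501, Streamlit's standard `--server.port` flag can set it at launch without editing webdemo.py (8502 below is just an example port):

```shell
# Serve the demo on a custom port, reachable from other machines
streamlit run webdemo.py --server.port 8502 --server.address 0.0.0.0
```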
## Training
Training requires installing new dependencies:
```bash
pip install peft deepspeed
```
Clone the project and download the v2.3 model files, taking the lightweight version as an example:
```bash
git lfs install
git clone https://github.com/204313508/noname_llm.git
git clone https://huggingface.co/huskyhong/noname-ai-v2_3-light
cd noname_llm/finetune
```
Modify the parameters required for training in the finetune script, such as model and dataset locations, then enter the following command to start training:
```bash
bash finetune.sh
```
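The variable names inside finetune.sh depend on the version you cloned, so the excerpt below is purely an illustration of the kind of parameters to check before launching, not the script's actual contents:

```shell
# Hypothetical excerpt of finetune.sh -- open your copy for the real variable names
MODEL_PATH="../noname-ai-v2_3-light"   # model directory cloned in the previous step
DATA_PATH="./data/train.json"          # location of your training dataset
OUTPUT_DIR="./output"                  # where fine-tuned checkpoints are saved
```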
Please refer to the [Fine-tuning Guide](./finetune/README.md) for detailed steps.
## Web Version/Server Example
![webdemo1](./webdemo1.png)
![webdemo2](./webdemo2.png)
## Notes
- AI generation is subject to uncontrollable factors, and the generated code does not guarantee 100% effectiveness. Bugs, redundant code, or additional special characters may still occur and require manual modification.
- (Important) Follow AI specifications. This AI model is for learning and communication purposes only. Please do not use it for illegal or commercial purposes. The purpose of releasing this model is to encourage better learning and communication, and all related information involved in the model is public. I bear no responsibility for malicious use of this AI model.
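As a concrete example of the manual cleanup mentioned above, here is a sketch that pulls the first fenced code block out of a model response and strips a few invisible characters (the regex and the character list are our assumptions, not an exhaustive rule):

```python
import re

_FENCE = "`" * 3  # three backticks, built this way to keep the example readable
_CODE_BLOCK = re.compile(_FENCE + r"(?:\w+)?\n(.*?)" + _FENCE, re.DOTALL)

def extract_code(response: str) -> str:
    """Return the first fenced code block in a model response, or the raw text."""
    match = _CODE_BLOCK.search(response)
    code = match.group(1) if match else response
    # Drop zero-width spaces and BOM characters that occasionally appear in generations
    return code.replace("\u200b", "").replace("\ufeff", "").strip()
```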
## Other Content
If you have any related questions, please open an issue in the official GitHub repository.
## Demo Images
These demo images are based on the v2.3 release.
![demo](./demo.png)
## Sponsorship
- Shamelessly begging for sponsorship
![sponsor](./sponsor.jpg)