How to run the model in oobabooga-text-generation-webui

#7
by supwang - opened

Hi,

I ran the model in oobabooga-text-generation-webui and got the error below.
Does anyone know how to solve it? Thank you.

Traceback (most recent call last):
File "D:\AI\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File "D:\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 78, in load_model
output = load_func_map[loader](model_name)
File "D:\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 148, in huggingface_loader
model = LoaderClass.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16, trust_remote_code=shared.args.trust_remote_code)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 480, in from_pretrained
model_class = get_class_from_dynamic_module(
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 431, in get_class_from_dynamic_module
final_module = get_cached_module_file(
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 268, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 151, in check_imports
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: transformers_stream_generator. Run pip install transformers_stream_generator

What about pip install transformers_stream_generator?

Thank you.
I tried that; it didn't work.
Should it be installed inside a particular virtual environment, or in base?

You can try copying the file into \installer_files\env\Lib\site-packages, but I encountered another issue, as below:
".cache\huggingface\modules\transformers_modules\Qwen_Qwen-7B\tokenization_qwen.py", line 38, in _load_tiktoken_bpe with open(tiktoken_bpe_file, "rb") as f: TypeError: expected str, bytes or os.PathLike object, not NoneType

That problem is caused by textgen-webui's automatic model download being broken.
Download the model yourself with git instead:

git lfs install
git clone https://huggingface.co/Qwen/Qwen-7B
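
Alternatively, a minimal Python download sketch using the huggingface_hub package (an assumption on my part that it is available in the webui's environment; the local_dir path is hypothetical and should match wherever the webui looks for models):

from huggingface_hub import snapshot_download

# Fetch every file in the repo (weight shards, config.json, qwen.tiktoken,
# the custom *.py files) into the webui's models folder in one call.
snapshot_download(repo_id="Qwen/Qwen-7B", local_dir="models/Qwen-7B")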

I did download the model myself, but it still doesn't seem to work.

Thanks, but it still doesn't seem to work.

Traceback (most recent call last):
File "D:\AI\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 185, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File "D:\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 79, in load_model
output = load_func_map[loader](model_name)
File "D:\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 149, in huggingface_loader
model = LoaderClass.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16, trust_remote_code=shared.args.trust_remote_code)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 498, in from_pretrained
model_class = get_class_from_dynamic_module(
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 451, in get_class_from_dynamic_module
final_module = get_cached_module_file(
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 279, in get_cached_module_file
modules_needed = check_imports(resolved_module_file)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\dynamic_module_utils.py", line 152, in check_imports
raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: transformers_stream_generator. Run pip install transformers_stream_generator

  1. Go into the oobabooga_windows folder
  2. Run cmd_windows.bat
  3. Run: pip install transformers_stream_generator
  4. Run: pip install tiktoken
  5. On the Model page of the web UI, check trust-remote-code on the left, then load the model (the sketch after this list shows the equivalent transformers call)
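
For reference, checking trust-remote-code corresponds to the transformers call below (a minimal sketch assembled from the traceback above; the model_dir path is an assumption):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True lets transformers run the custom
# modeling_qwen.py / tokenization_qwen.py shipped with the checkpoint.
model_dir = "models/Qwen-7B"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)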

It reports an error:
File "Qwen-7B/modeling_qwen.py", line 1111, in generate
return super().generate(
File "/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "lib/python3.10/site-packages/transformers/generation/utils.py", line 1296, in generate
eos_token_id = eos_token_id[0]
IndexError: list index out of range
Output generated in 0.48 seconds (0.00 tokens/s, 0 tokens, context 61, seed 1066529497)
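
The IndexError suggests that eos_token_id resolved to an empty list inside transformers' generate(). A possible workaround, sketched here as an untested assumption rather than a confirmed fix, is to pass the end-of-text token id explicitly, reusing the tokenizer and model from the loading sketch above (<|endoftext|> is one of Qwen's special tokens):

# Untested sketch: supply eos_token_id explicitly so generate() does not
# index into an empty list.
input_ids = tokenizer("你好", return_tensors="pt").input_ids
eos_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")
output_ids = model.generate(input_ids, eos_token_id=eos_id, max_new_tokens=64)
print(tokenizer.decode(output_ids[0]))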


Thanks.
I copied transformers_stream_generator into .\oobabooga_windows\installer_files\env\Lib\site-packages,
and now I get the new error below. How can I fix it? :)
Traceback (most recent call last):
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 479, in load_state_dict
return torch.load(checkpoint_file, map_location=map_location)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\torch\serialization.py", line 797, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\torch\serialization.py", line 283, in init
super().init(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 483, in load_state_dict
if f.read(7) == "version":
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 64: illegal multibyte sequence

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\AI\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 185, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File "D:\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 79, in load_model
output = load_func_map[loader](model_name)
File "D:\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 149, in huggingface_loader
model = LoaderClass.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16, trust_remote_code=shared.args.trust_remote_code)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 511, in from_pretrained
return model_class.from_pretrained(
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2805, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "D:\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 495, in load_state_dict
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'models\Qwen-7B\pytorch_model.bin' at 'models\Qwen-7B\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
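
One quick diagnostic, sketched here as my own suggestion rather than something from this thread: recent PyTorch .bin checkpoints are zip archives, so "failed finding central directory" usually means a truncated or corrupted download, which can be checked without loading the weights:

import zipfile

# A complete zip-format checkpoint passes this test; a partial or
# corrupted pytorch_model.bin does not.
path = r"models\Qwen-7B\pytorch_model.bin"
print("valid zip archive:", zipfile.is_zipfile(path))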

Solved it. It turns out the model is now split into 8 shards, while I was still using the earliest single 10.3 GB file. After switching to the new model, it runs normally.
But there is a problem during chat: the replies always contain a lot of extra content.
What is going on?

[Screenshot 2023-08-24 090623.png]

[Screenshot 2023-08-24 091131.png]

supwang changed discussion status to closed
supwang changed discussion status to open

Try the Qwen-7B-Chat model instead: Qwen/Qwen-7B-Chat.

Thanks.
I tried Qwen-7B-Chat; it feels even worse than Qwen-7B, like chatting with someone who has lost their mind.

[Screenshot 2023-08-30 153455.png]

[Screenshot 2023-08-30 153435.png]

Under the Parameters tab, find Custom stopping strings and enter
'\nYou:', '\nHuman', '\nAssistant:', '\n你:', '\n###你'
to keep the model from asking and answering its own questions (see the sketch after this paragraph for roughly how stopping strings work).
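
Conceptually, the webui's stopping strings behave like a transformers stopping criterion; a minimal illustrative sketch (my own code, not the webui's actual implementation):

from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnStrings(StoppingCriteria):
    """Stop generation once any stop string appears in the newly generated text."""

    def __init__(self, tokenizer, stops, prompt_len):
        self.tokenizer = tokenizer
        self.stops = stops
        self.prompt_len = prompt_len  # number of prompt tokens to skip

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return any(stop in new_text for stop in self.stops)

# Hypothetical usage:
# criteria = StoppingCriteriaList([StopOnStrings(tokenizer, ['\nYou:', '\n你:'], input_ids.shape[1])])
# model.generate(input_ids, stopping_criteria=criteria)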

Thanks.
After entering them there doesn't seem to be any improvement. Did I set the parameters incorrectly?
[Screenshot 2023-09-02 162054.png]

[Screenshot 2023-09-02 162111.png]

Your screenshot shows a different problem from the earlier one. The settings are fine, but they can only stop the AI from asking and answering itself; they do nothing for the gibberish in your screenshot.
Check which model you are using. If it is an English-only model, it will ramble nonsense when it cannot understand Chinese...

Thank you.
I switched back to Qwen-7B and gave up on the Chat version. After adding the stopping strings, it looks acceptable for now,
but it still occasionally asks and answers its own questions and talks nonsense.
[Screenshot 2023-09-03 084715.png]

Check whether your preset is mirostat. That one only works when the model is loaded with llama.cpp; with the other loaders it seems to drive the model crazy.

Thanks. With mirostat and the llama.cpp loader it is indeed much better, but it still often talks nonsense, haha.
[Screenshot 2023-09-03 201637.png]

Haha, but what I meant was: avoid mirostat if possible. According to tests in the reddit community, mirostat suits certain models plus long replies, i.e. long-form output (for example, long R18 scene descriptions with Hermes). For general use, oobabooga recommends these presets:
Instruction following:
Divine Intellect
Big O
simple-1

Chat:
Midnight Enigma
Yara
Shortwave

Thank you.
One more question: do I need to pick something under 'Filter by loader', or is leaving it on All fine?

Help me, guys. Issue below:
Traceback (most recent call last):
File "D:\Downloads\text-generation-webui-main\modules\ui_model_menu.py", line 201, in load_model_wrapper
shared.model, shared.tokenizer = load_model(shared.model_name, loader)
File "D:\Downloads\text-generation-webui-main\modules\models.py", line 78, in load_model
output = load_func_map[loader](model_name)
File "D:\Downloads\text-generation-webui-main\modules\models.py", line 122, in huggingface_loader
config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=params['trust_remote_code'])
File "D:\Downloads\text-generation-webui-main\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1037, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "D:\Downloads\text-generation-webui-main\installer_files\env\lib\site-packages\transformers\configuration_utils.py", line 620, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "D:\Downloads\text-generation-webui-main\installer_files\env\lib\site-packages\transformers\configuration_utils.py", line 675, in _get_config_dict
resolved_config_file = cached_file(
File "D:\Downloads\text-generation-webui-main\installer_files\env\lib\site-packages\transformers\utils\hub.py", line 400, in cached_file
raise EnvironmentError(
OSError: models\Qwen-7B does not appear to have a file named config.json. Checkout 'https://huggingface.co/models\Qwen-7B/None' for available files.

Please refer to the replies in these issues:
https://huggingface.co/Qwen/Qwen-7B-Chat/discussions/26#64d306b50f17d186414fe550
https://github.com/QwenLM/Qwen/issues/361

If you are using the pretrained (base) model, it has not been through conversational human-alignment fine-tuning, so chat mode is not very stable (even with a preset). For dialogue, the Chat model is recommended.

The Chat model's input is in ChatML format. If you call the Chat model from text-generation-webui in text-generation mode, I suggest filling in the Text generation tab like this (not sure whether the UI has been redesigned again):

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
your content goes here<|im_end|>
<|im_start|>assistant

Also, text-generation-webui's default configuration is not compatible with Qwen: it decides when to stop by scanning the output string for stop strings. You need to set skip_special_tokens to False and set custom_stopping_strings to "<|im_end|>", "<|im_start|>", "<|endoftext|>". Both can be set in the Parameters tab. (A minimal prompt-assembly sketch follows below.)
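
For reference, a minimal sketch that assembles the ChatML prompt above in Python (the helper name build_chatml is my own, not part of any library):

def build_chatml(user_msg, system_msg="You are a helpful assistant."):
    """Assemble a single-turn ChatML prompt for Qwen-7B-Chat."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml("你好,请介绍一下你自己。"))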

It seems the files failed to download. Please check whether they are all there. You can try downloading and placing the files manually and see if the instructions from otgw help you: https://github.com/oobabooga/text-generation-webui#downloading-models

Thanks, this reply is spot on.
How should text-generation-webui be configured so that Qwen can handle texts of more than 80,000 Chinese characters?

supwang changed discussion status to closed
