Is text-generation-webui supported?
It throws an error after I put the model into the models folder.
It works for me. I used:
python server.py --model THUDM_chatglm-6b --trust-remote-code --chat
Thanks, it worked!
Hi, when you say "put it into the model folder", do you mean only the model weight files?
Do the other files, e.g. config.json, configuration_chatglm.py, etc., need to go in as well?
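For what it's worth, models loaded with --trust-remote-code (like ChatGLM) generally need the full repo contents, not just the weight shards, because the custom config and model classes live in the repo's .py files. A hedged sketch of a sanity check for a local model folder; the file list and the `missing_files` helper are assumptions based on the THUDM/chatglm-6b repo layout, not part of webui:

```python
import os

# Assumed minimum file set, based on the THUDM/chatglm-6b Hub repo layout.
# configuration_chatglm.py / modeling_chatglm.py are the custom classes that
# --trust-remote-code imports, so weight files alone are not enough.
EXPECTED_FILES = [
    "config.json",
    "configuration_chatglm.py",
    "modeling_chatglm.py",
    "tokenizer_config.json",
]

def missing_files(model_dir: str) -> list[str]:
    """Return the expected files that are absent from model_dir."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(model_dir, f))]
```

Running this against your local folder before launching server.py can tell you which files still need to be copied over.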
I want to run this on a fully offline machine, but I hit the following problem and would like to ask how to solve it.
Using the command: python server.py --model local_model_path --trust-remote-code --chat
CMD first shows "Failed to load the model".
When I open the text-generation-webui page and try to load the model again, I get the following error:
Traceback (most recent call last):
  File "...\transformers\configuration_utils.py", line 672, in _get_config_dict
    resolved_config_file = cached_file(
  File "...\transformers\utils\hub.py", line 417, in cached_file
    resolved_file = hf_hub_download(
  File "...\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "...\huggingface_hub\utils\_validators.py", line 164, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'models\chatglm2-6b'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "...\oobabooga_windows\oobabooga_windows\text-generation-webui\server.py", line 68, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "...\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\models.py", line 78, in load_model
    output = load_func_map[loader](model_name)
  File "...\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\models.py", line 218, in huggingface_loader
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "...\transformers\models\auto\auto_factory.py", line 461, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "...\transformers\models\auto\configuration_auto.py", line 983, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "...\transformers\configuration_utils.py", line 617, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "...\transformers\configuration_utils.py", line 693, in _get_config_dict
    raise EnvironmentError(
OSError: Can't load the configuration of 'models\chatglm2-6b'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'models\chatglm2-6b' is the correct path to a directory containing a config.json file
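The HFValidationError above is raised because the Windows-style path models\chatglm2-6b ends up being passed to the Hub as if it were a repo id, and backslashes are not allowed in repo ids. A minimal sketch of that validation rule, illustrative only (the regex and `looks_like_repo_id` are my own approximations, not huggingface_hub's actual code):

```python
import re

# Rough approximation of the repo-id rule quoted in the error message:
# alphanumerics plus '-', '_', '.', at most one '/' separator,
# cannot start with '-' or '.', max length 96. No backslashes.
REPO_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_.-]*(/[A-Za-z0-9_.-]+)?$")

def looks_like_repo_id(name: str) -> bool:
    """Return True if `name` would plausibly pass Hub repo-id validation."""
    return len(name) <= 96 and bool(REPO_ID_RE.match(name))

print(looks_like_repo_id("THUDM/chatglm2-6b"))    # True: a valid Hub repo id
print(looks_like_repo_id("models\\chatglm2-6b"))  # False: a Windows path, not a repo id
```

One common workaround is to put the model folder under text-generation-webui's own models/ directory and pass just the folder name (e.g. --model chatglm2-6b) rather than a path, so the loader resolves it locally instead of forwarding it to the Hub.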
Is there a mature solution for 4-bit support?
See this issue: https://github.com/oobabooga/text-generation-webui/issues/5190