Error trying to duplicate

#3
by johnblues - opened

When trying to duplicate, I get the following error:
ValueError: Invalid model path: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt

Has anyone successfully duplicated this Space?

Hi,

I have changed the path to ckpts. You can retry in one of three ways:

  • Synchronize your Space with this one
  • Replace tencent_HunyuanVideo with ckpts in app.py
  • Or duplicate the Space a second time

I duplicated the Space again and got this error:
ValueError: Invalid model path: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt

So the same error.

I have added some logs. Do you see these lines in your logs?
initialize_model: ...
models_root exists: ...
Model initialized: ...

And also this one and the lines that follow it?
What is dit_weight: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt

PS: I have slightly changed the code; that may fix the Space.
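For reference, the diagnostic prints described above could look roughly like this (a sketch only; the function and variable names are assumptions, not the Space's actual code):

```python
from pathlib import Path

def log_model_paths(models_root: str, dit_weight: str) -> bool:
    """Print the path checks shown in the logs; return True if the weight file exists."""
    root = Path(models_root)
    weight = Path(dit_weight)
    print(f"initialize_model: {models_root}")
    print(f"models_root exists: {root.exists()}")
    print(f"What is dit_weight: {weight}")
    print(f"dit_weight.exists(): {weight.exists()}")
    print(f"dit_weight.is_file(): {weight.is_file()}")
    print(f"dit_weight.is_symlink(): {weight.is_symlink()}")
    return weight.is_file()
```

If `dit_weight.exists()` prints `False`, the checkpoint was never downloaded to that path, which matches the `Invalid model path` error.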

This is the output when I just tried to duplicate. It is different from the previous errors.

runtime error
Exit code: 1. Reason: A

mp_rank_00_model_states_fp8.pt:  90%|████████▉ | 11.9G/13.2G [00:09<00:01, 1.31GB/s]
mp_rank_00_model_states_fp8.pt: 100%|█████████▉| 13.2G/13.2G [00:10<00:00, 1.30GB/s]

mp_rank_00_model_states_fp8_map.pt:   0%|          | 0.00/104k [00:00<?, ?B/s]
mp_rank_00_model_states_fp8_map.pt: 100%|██████████| 104k/104k [00:00<00:00, 39.7MB/s]

hunyuan-video-t2v-720p/vae/config.json:   0%|          | 0.00/785 [00:00<?, ?B/s]
hunyuan-video-t2v-720p/vae/config.json: 100%|██████████| 785/785 [00:00<00:00, 8.40MB/s]

pytorch_model.pt:   0%|          | 0.00/986M [00:00<?, ?B/s]

pytorch_model.pt: 100%|█████████▉| 986M/986M [00:01<00:00, 918MB/s]
pytorch_model.pt: 100%|█████████▉| 986M/986M [00:02<00:00, 460MB/s]
initialize_model: ckpts
models_root exists: ckpts
2025-01-03 07:23:31.750 | INFO | hyvideo.inference:from_pretrained:154 - Got text-to-video model root path: ckpts
2025-01-03 07:23:31.974 | INFO | hyvideo.inference:from_pretrained:189 - Building model...
What is dit_weight: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
dit_weight.exists(): False
dit_weight.is_file(): False
dit_weight.is_dir(): False
dit_weight.is_symlink(): False
Traceback (most recent call last):
  File "/home/user/app/app.py", line 170, in <module>
    demo = create_demo("ckpts")
  File "/home/user/app/app.py", line 94, in create_demo
    model = initialize_model(model_path)
  File "/home/user/app/app.py", line 40, in initialize_model
    hunyuan_video_sampler = HunyuanVideoSampler.from_pretrained(models_root_path, args=args)
  File "/home/user/app/hyvideo/inference.py", line 203, in from_pretrained
    model = Inference.load_state_dict(args, model, pretrained_model_path)
  File "/home/user/app/hyvideo/inference.py", line 314, in load_state_dict
    print('dit_weight.is_junction(): ' + str(dit_weight.is_junction()))
AttributeError: 'PosixPath' object has no attribute 'is_junction'
Container logs:

===== Application Startup at 2025-01-03 06:20:03 =====

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().

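The `AttributeError` in the traceback comes from `Path.is_junction()`, which was only added to `pathlib` in Python 3.12 (and junctions only exist on Windows); on the Space's Python 3.10 a `PosixPath` has no such attribute. A guarded check would avoid the crash (a sketch, not the Space's actual code):

```python
from pathlib import Path

def is_junction_safe(p: Path) -> bool:
    # Path.is_junction() exists only on Python >= 3.12; on older versions
    # (and for any non-Windows path) treat the answer as False instead of
    # raising AttributeError.
    return p.is_junction() if hasattr(p, "is_junction") else False
```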

OK, you can retry. (It now downloads with a snapshot instead of file by file.)
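The switch to a snapshot download could be sketched like this, assuming `huggingface_hub` is installed and using the repo id from the HunyuanVideo model card (the function name is an illustration, not the Space's actual code):

```python
def fetch_checkpoints(repo_id: str = "tencent/HunyuanVideo",
                      local_dir: str = "ckpts") -> str:
    # snapshot_download pulls every file in the repository in one call,
    # instead of issuing a separate hf_hub_download per checkpoint file,
    # so a partially downloaded tree is much less likely.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)
```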

May I guess it's working now? ๐Ÿ™‚

No, still getting an error. I just kind of got frustrated and gave up.

runtime error
Exit code: 1. Reason: coder model (llm) from: ./ckpts/text_encoder
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/utils/hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './ckpts/text_encoder'. Use repo_type argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 167, in <module>
    demo = create_demo("ckpts")
  File "/home/user/app/app.py", line 86, in create_demo
    model = initialize_model(model_path)
  File "/home/user/app/app.py", line 32, in initialize_model
    hunyuan_video_sampler = HunyuanVideoSampler.from_pretrained(models_root_path, args=args)
  File "/home/user/app/hyvideo/inference.py", line 241, in from_pretrained
    text_encoder = TextEncoder(
  File "/home/user/app/hyvideo/text_encoder/__init__.py", line 180, in __init__
    self.model, self.model_path = load_text_encoder(
  File "/home/user/app/hyvideo/text_encoder/__init__.py", line 36, in load_text_encoder
    text_encoder = AutoModel.from_pretrained(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 487, in from_pretrained
    resolved_config_file = cached_file(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/utils/hub.py", line 469, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: './ckpts/text_encoder'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
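This `HFValidationError` happens because `transformers` treats a string that is not an existing local directory as a Hub repo id, and `./ckpts/text_encoder` is not a valid repo id. In other words, the text encoder directory was missing when `AutoModel.from_pretrained` ran. A defensive check like the following could surface that earlier (a sketch; the function name is an illustration, not the Space's actual code):

```python
import os

def resolve_text_encoder_path(path_or_repo: str) -> str:
    # If the directory does not exist locally, transformers falls back to
    # treating the string as a Hub repo id, which fails validation for a
    # path like "./ckpts/text_encoder". Fail fast with a clearer message.
    if os.path.isdir(path_or_repo):
        return os.path.abspath(path_or_repo)
    raise FileNotFoundError(
        f"{path_or_repo!r} is not a local directory; "
        "download the text encoder before calling from_pretrained."
    )
```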

FYI, I have updated the Space thanks to your logs.
