+
+ The Real First Universal Charset Detector
+
+ Featured Packages
+
+ In other language (unofficial port - by the community)
+
+>>>>> 👉 Try Me Online Now, Then Adopt Me 👈 <<<<<
+
+This project offers you an alternative to **Universal Charset Encoding Detector**, also known as **Chardet**.
+
+| Feature | [Chardet](https://github.com/chardet/chardet) | Charset Normalizer | [cChardet](https://github.com/PyYoshi/cChardet) |
+|--------------------------------------------------|:---------------------------------------------:|:------------------:|:-----------------------------------------------:|
+| `Fast` | ❌ | ✅ | ✅ |
+| `Universal**` | ❌ | ✅ | ❌ |
+| `Reliable` **without** distinguishable standards | ❌ | ✅ | ✅ |
+| `Reliable` **with** distinguishable standards | ✅ | ✅ | ✅ |
+| `License` | LGPL-2.1 |
+
+The official Python client for the Huggingface Hub.
+
+English | Deutsch | हिंदी | 한국어 | 中文(简体)
+
+
+---
+
+**Documentation**: https://hf.co/docs/huggingface_hub
+
+**Source Code**: https://github.com/huggingface/huggingface_hub
+
+---
+
+## Welcome to the huggingface_hub library
+
+The `huggingface_hub` library allows you to interact with the [Hugging Face Hub](https://huggingface.co/), a platform democratizing open-source Machine Learning for creators and collaborators. Discover pre-trained models and datasets for your projects or play with the thousands of machine learning apps hosted on the Hub. You can also create and share your own models, datasets and demos with the community. The `huggingface_hub` library provides a simple way to do all these things with Python.
+
+## Key features
+
+- [Download files](https://huggingface.co/docs/huggingface_hub/en/guides/download) from the Hub.
+- [Upload files](https://huggingface.co/docs/huggingface_hub/en/guides/upload) to the Hub.
+- [Manage your repositories](https://huggingface.co/docs/huggingface_hub/en/guides/repository).
+- [Run Inference](https://huggingface.co/docs/huggingface_hub/en/guides/inference) on deployed models.
+- [Search](https://huggingface.co/docs/huggingface_hub/en/guides/search) for models, datasets and Spaces.
+- [Share Model Cards](https://huggingface.co/docs/huggingface_hub/en/guides/model-cards) to document your models.
+- [Engage with the community](https://huggingface.co/docs/huggingface_hub/en/guides/community) through PRs and comments.
+
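+Most of the features above are also exposed programmatically through the `HfApi` client. As a minimal sketch (the model id below is only an example; any public repository id would work), you can search the Hub and inspect a repository like this:
+
+```py
+from huggingface_hub import HfApi
+
+api = HfApi()
+
+# List a few text-classification models hosted on the Hub.
+for model in api.list_models(filter="text-classification", limit=5):
+    print(model.id)
+
+# Fetch metadata about a specific public repository.
+info = api.model_info("distilbert-base-uncased")
+print(info.sha, info.tags)
+```
+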
+## Installation
+
+Install the `huggingface_hub` package with [pip](https://pypi.org/project/huggingface-hub/):
+
+```bash
+pip install huggingface_hub
+```
+
+If you prefer, you can also install it with [conda](https://huggingface.co/docs/huggingface_hub/en/installation#install-with-conda).
+
+In order to keep the package minimal by default, `huggingface_hub` comes with optional dependencies useful for some use cases. For example, if you want to have a complete experience for Inference, run:
+
+```bash
+pip install huggingface_hub[inference]
+```
+
+To learn more about installation and optional dependencies, check out the [installation guide](https://huggingface.co/docs/huggingface_hub/en/installation).
+
+## Quick start
+
+### Download files
+
+Download a single file
+
+```py
+from huggingface_hub import hf_hub_download
+
+hf_hub_download(repo_id="tiiuae/falcon-7b-instruct", filename="config.json")
+```
+
+Or an entire repository
+
+```py
+from huggingface_hub import snapshot_download
+
+snapshot_download("stabilityai/stable-diffusion-2-1")
+```
+
+Files will be downloaded to a local cache folder. More details are available in [this guide](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache).
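+
+If you want to inspect or manage that cache programmatically, `scan_cache_dir` returns a structured report. A minimal sketch:
+
+```py
+from huggingface_hub import scan_cache_dir
+
+# Scan the default cache folder (~/.cache/huggingface/hub unless configured otherwise).
+cache_info = scan_cache_dir()
+print(f"Total size on disk: {cache_info.size_on_disk} bytes")
+for repo in cache_info.repos:
+    print(repo.repo_id, repo.repo_type, repo.size_on_disk)
+```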
+
+### Login
+
+The Hugging Face Hub uses tokens to authenticate applications (see [docs](https://huggingface.co/docs/hub/security-tokens)). To log in from your machine, run the following CLI command:
+
+```bash
+huggingface-cli login
+# or using an environment variable
+huggingface-cli login --token $HUGGINGFACE_TOKEN
+```
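+
+You can also authenticate directly from Python. A minimal sketch (the token value is a placeholder for your own User Access Token):
+
+```py
+from huggingface_hub import login, whoami
+
+# Pass a token explicitly, or call login() with no argument to be prompted for one.
+login(token="hf_xxx")  # placeholder token
+print(whoami()["name"])
+```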
+
+### Create a repository
+
+```py
+from huggingface_hub import create_repo
+
+create_repo(repo_id="super-cool-model")
+```
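+
+The same call also covers dataset and Space repositories. As a sketch (the repository name is a placeholder):
+
+```py
+from huggingface_hub import create_repo
+
+# Create a private dataset repository instead of a model repository.
+create_repo(repo_id="username/my-dataset", repo_type="dataset", private=True)
+```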
+
+### Upload files
+
+Upload a single file
+
+```py
+from huggingface_hub import upload_file
+
+upload_file(
+ path_or_fileobj="/home/lysandre/dummy-test/README.md",
+ path_in_repo="README.md",
+ repo_id="lysandre/test-model",
+)
+```
+
+Or an entire folder
+
+```py
+from huggingface_hub import upload_folder
+
+upload_folder(
+ folder_path="/path/to/local/space",
+ repo_id="username/my-cool-space",
+ repo_type="space",
+)
+```
+
+For more details, check out the [upload guide](https://huggingface.co/docs/huggingface_hub/en/guides/upload).
+
+## Integrating with the Hub
+
+We're partnering with cool open source ML libraries to provide free model hosting and versioning. You can find the existing integrations [here](https://huggingface.co/docs/hub/libraries).
+
+The advantages are:
+
+- Free model or dataset hosting for libraries and their users.
+- Built-in file versioning, even with very large files, thanks to a git-based approach.
+- A serverless Inference API for all publicly available models.
+- In-browser widgets to play with the uploaded models.
+- Anyone can upload a new model for your library; they just need to add the corresponding tag for the model to be discoverable.
+- Fast downloads! We use CloudFront (a CDN) to geo-replicate downloads so they're blazing fast from anywhere on the globe.
+- Usage stats and more features to come.
+
+If you would like to integrate your library, feel free to open an issue to begin the discussion. We wrote a [step-by-step guide](https://huggingface.co/docs/hub/adding-a-library) with ❤️ showing how to do this integration.
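+
+If your library produces PyTorch models, one common starting point is the `PyTorchModelHubMixin` helper, which adds `push_to_hub` and `from_pretrained` methods to a model class. A minimal sketch (class and repository names are placeholders, and `torch` must be installed):
+
+```py
+import torch.nn as nn
+
+from huggingface_hub import PyTorchModelHubMixin
+
+
+class MyModel(nn.Module, PyTorchModelHubMixin):
+    def __init__(self, hidden_size: int = 128):
+        super().__init__()
+        self.layer = nn.Linear(hidden_size, hidden_size)
+
+    def forward(self, x):
+        return self.layer(x)
+
+
+model = MyModel()
+model.push_to_hub("username/my-cool-model")                    # upload weights + config
+reloaded = MyModel.from_pretrained("username/my-cool-model")   # download them back
+```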
+
+## Contributions (feature requests, bugs, etc.) are super welcome 💙💚💛💜🧡❤️
+
+Everyone is welcome to contribute, and we value everybody's contribution. Code is not the only way to help the community.
+Answering questions, helping others, reaching out, and improving the documentation are immensely valuable to the community.
+We wrote a [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to summarize
+how to get started to contribute to this repository.
+
+
diff --git a/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/RECORD b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/RECORD
new file mode 100644
index 0000000000000000000000000000000000000000..2b0b66a38fbddb5e50db97954a4310d2649a061f
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/RECORD
@@ -0,0 +1,256 @@
+../../Scripts/huggingface-cli.exe,sha256=jPydxwYOgK7NV5SfNkpDQA-xD-aFEHknFsydS0cljyc,108424
+huggingface_hub-0.29.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+huggingface_hub-0.29.1.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+huggingface_hub-0.29.1.dist-info/METADATA,sha256=B8dl2q55ILPp7jxGZ3Nx0zT0AlCd74Z0ipYxygbW3FI,13480
+huggingface_hub-0.29.1.dist-info/RECORD,,
+huggingface_hub-0.29.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+huggingface_hub-0.29.1.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
+huggingface_hub-0.29.1.dist-info/entry_points.txt,sha256=Y3Z2L02rBG7va_iE6RPXolIgwOdwUFONyRN3kXMxZ0g,131
+huggingface_hub-0.29.1.dist-info/top_level.txt,sha256=8KzlQJAY4miUvjAssOAJodqKOw3harNzuiwGQ9qLSSk,16
+huggingface_hub/__init__.py,sha256=T-o7tRMXCYjO5nPSgmN_PAVEpFlQTOp7gh-gh8ucXak,48761
+huggingface_hub/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/__pycache__/_commit_api.cpython-312.pyc,,
+huggingface_hub/__pycache__/_commit_scheduler.cpython-312.pyc,,
+huggingface_hub/__pycache__/_inference_endpoints.cpython-312.pyc,,
+huggingface_hub/__pycache__/_local_folder.cpython-312.pyc,,
+huggingface_hub/__pycache__/_login.cpython-312.pyc,,
+huggingface_hub/__pycache__/_snapshot_download.cpython-312.pyc,,
+huggingface_hub/__pycache__/_space_api.cpython-312.pyc,,
+huggingface_hub/__pycache__/_tensorboard_logger.cpython-312.pyc,,
+huggingface_hub/__pycache__/_upload_large_folder.cpython-312.pyc,,
+huggingface_hub/__pycache__/_webhooks_payload.cpython-312.pyc,,
+huggingface_hub/__pycache__/_webhooks_server.cpython-312.pyc,,
+huggingface_hub/__pycache__/community.cpython-312.pyc,,
+huggingface_hub/__pycache__/constants.cpython-312.pyc,,
+huggingface_hub/__pycache__/errors.cpython-312.pyc,,
+huggingface_hub/__pycache__/fastai_utils.cpython-312.pyc,,
+huggingface_hub/__pycache__/file_download.cpython-312.pyc,,
+huggingface_hub/__pycache__/hf_api.cpython-312.pyc,,
+huggingface_hub/__pycache__/hf_file_system.cpython-312.pyc,,
+huggingface_hub/__pycache__/hub_mixin.cpython-312.pyc,,
+huggingface_hub/__pycache__/inference_api.cpython-312.pyc,,
+huggingface_hub/__pycache__/keras_mixin.cpython-312.pyc,,
+huggingface_hub/__pycache__/lfs.cpython-312.pyc,,
+huggingface_hub/__pycache__/repocard.cpython-312.pyc,,
+huggingface_hub/__pycache__/repocard_data.cpython-312.pyc,,
+huggingface_hub/__pycache__/repository.cpython-312.pyc,,
+huggingface_hub/_commit_api.py,sha256=TqXmu5moVAhBa7iuyJdsqsfRTxTpGMnvsPkb4GgC3dc,32636
+huggingface_hub/_commit_scheduler.py,sha256=tfIoO1xWHjTJ6qy6VS6HIoymDycFPg0d6pBSZprrU2U,14679
+huggingface_hub/_inference_endpoints.py,sha256=SLoZOQtv_hNl0Xuafo34L--zuCZ3zSJja2tSkYkG5V4,17268
+huggingface_hub/_local_folder.py,sha256=ScpCJUITFC0LMkiebyaGiBhAU6fvQK8w7pVV6L8rhmc,16575
+huggingface_hub/_login.py,sha256=ssf4viT5BhHI2ZidnSuAZcrwSxzaLOrf8xgRVKuvu_A,20298
+huggingface_hub/_snapshot_download.py,sha256=zZDaPBb4CfMCU7DgxjbaFmdoISCY425RaH7wXwFijEM,14992
+huggingface_hub/_space_api.py,sha256=QVOUNty2T4RxPoxf9FzUjXmjHiGXP0mqXJzqQ7GmoJo,5363
+huggingface_hub/_tensorboard_logger.py,sha256=ZkYcAUiRC8RGL214QUYtp58O8G5tn-HF6DCWha9imcA,8358
+huggingface_hub/_upload_large_folder.py,sha256=eedUTowflZx1thFVLDv7hLd_LQqixa5NVsUco7R6F5c,23531
+huggingface_hub/_webhooks_payload.py,sha256=Xm3KaK7tCOGBlXkuZvbym6zjHXrT1XCrbUFWuXiBmNY,3617
+huggingface_hub/_webhooks_server.py,sha256=oCvpFrYjrhJjClAMw26SQfvN4DUItgK2IhFp1OVh2bU,15623
+huggingface_hub/commands/__init__.py,sha256=AkbM2a-iGh0Vq_xAWhK3mu3uZ44km8-X5uWjKcvcrUQ,928
+huggingface_hub/commands/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/_cli_utils.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/delete_cache.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/download.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/env.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/huggingface_cli.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/lfs.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/repo_files.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/scan_cache.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/tag.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/upload.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/upload_large_folder.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/user.cpython-312.pyc,,
+huggingface_hub/commands/__pycache__/version.cpython-312.pyc,,
+huggingface_hub/commands/_cli_utils.py,sha256=Nt6CjbkYqQQRuh70bUXVA6rZpbZt_Sa1WqBUxjQLu6g,2095
+huggingface_hub/commands/delete_cache.py,sha256=Rb1BtIltJPnQ-th7tcK_L4mFqfk785t3KXV77xXKBP4,16131
+huggingface_hub/commands/download.py,sha256=1YXKttB8YBX7SJ0Jxg0t1n8yp2BUZXtY0ck6DhCg-XE,8183
+huggingface_hub/commands/env.py,sha256=yYl4DSS14V8t244nAi0t77Izx5LIdgS_dy6xiV5VQME,1226
+huggingface_hub/commands/huggingface_cli.py,sha256=ZwW_nwgppyj-GA6iM3mgmbXMZ63bgtpGl_yIQDyWS4A,2414
+huggingface_hub/commands/lfs.py,sha256=xdbnNRO04UuQemEhUGT809jFgQn9Rj-SnyT_0Ph-VYg,7342
+huggingface_hub/commands/repo_files.py,sha256=Nfv8TjuaZVOrj7TZjrojtjdD8Wf54aZvYPDEOevh7tA,4923
+huggingface_hub/commands/scan_cache.py,sha256=xdD_zRKd49hRuATyptG-zaY08h1f9CAjB5zZBKe0YEo,8563
+huggingface_hub/commands/tag.py,sha256=0LNQZyK-WKi0VIL9i1xWzKxJ1ILw1jxMF_E6t2weJss,6288
+huggingface_hub/commands/upload.py,sha256=xMExm68YcR8R_dDRi3bcIC1qVCvRFRW7aP_AGxGZ1rc,13656
+huggingface_hub/commands/upload_large_folder.py,sha256=P-EO44JWVl39Ax4b0E0Z873d0a6S38Qas8P6DaL1EwI,6129
+huggingface_hub/commands/user.py,sha256=M6Ef045YcyV4mFCbLaTRPciQDC6xtV9MMheeen69D0E,11168
+huggingface_hub/commands/version.py,sha256=vfCJn7GO1m-DtDmbdsty8_RTVtnZ7lX6MJsx0Bf4e-s,1266
+huggingface_hub/community.py,sha256=4MtcoxEI9_0lmmilBEnvUEi8_O1Ivfa8p6eKxYU5-ts,12198
+huggingface_hub/constants.py,sha256=JOswJMnb45udoZibIcH5v71gILOKvVHBjpCqGZK5xDw,8560
+huggingface_hub/errors.py,sha256=zble0j94ai8zwyM0a2DovwcF372zQohwDsgajTsaxqI,9703
+huggingface_hub/fastai_utils.py,sha256=DpeH9d-6ut2k_nCAAwglM51XmRmgfbRe2SPifpVL5Yk,16745
+huggingface_hub/file_download.py,sha256=CU8ZANwJ4nf436jDCP9Ru8qEvdbZD4QznvAo6vbTO_4,70613
+huggingface_hub/hf_api.py,sha256=g81_Vs2n08Hm4kksO6QoNLDWkYaSnIjPdiu-qZOgMks,423772
+huggingface_hub/hf_file_system.py,sha256=m_g7uYLGxTdsBnhvR5835jvYMAuEBsUSFvEbzZKzzoo,47500
+huggingface_hub/hub_mixin.py,sha256=-oTnuB3b-0WeutZ1iBkAy1YuWrBKvHBVBpmd3-7oGB4,37419
+huggingface_hub/inference/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+huggingface_hub/inference/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/inference/__pycache__/_client.cpython-312.pyc,,
+huggingface_hub/inference/__pycache__/_common.cpython-312.pyc,,
+huggingface_hub/inference/_client.py,sha256=aiiVqLiYisaEZTOxkv90vGhsdIM-fvXdhuDwhoNbjSQ,162205
+huggingface_hub/inference/_common.py,sha256=iwCkq2fWE1MVoPTeeXN7UN5FZi7g5fZ3K8PHSOCi5dU,14591
+huggingface_hub/inference/_generated/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+huggingface_hub/inference/_generated/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/inference/_generated/__pycache__/_async_client.cpython-312.pyc,,
+huggingface_hub/inference/_generated/_async_client.py,sha256=barbsIBB5d76l3zO3Tj_2WV6Phmwfjtuq7277qHfOYg,168438
+huggingface_hub/inference/_generated/types/__init__.py,sha256=CJwdkaPbR-vzCWU1ITr4aHOHax87JaewaIs_7rKaRXE,6274
+huggingface_hub/inference/_generated/types/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/audio_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/audio_to_audio.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/automatic_speech_recognition.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/base.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/chat_completion.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/depth_estimation.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/document_question_answering.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/feature_extraction.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/fill_mask.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/image_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/image_segmentation.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/image_to_image.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/image_to_text.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/object_detection.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/question_answering.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/sentence_similarity.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/summarization.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/table_question_answering.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text2text_generation.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text_generation.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text_to_audio.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text_to_image.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text_to_speech.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/text_to_video.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/token_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/translation.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/video_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/visual_question_answering.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/zero_shot_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/zero_shot_image_classification.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/__pycache__/zero_shot_object_detection.cpython-312.pyc,,
+huggingface_hub/inference/_generated/types/audio_classification.py,sha256=Jg3mzfGhCSH6CfvVvgJSiFpkz6v4nNA0G4LJXacEgNc,1573
+huggingface_hub/inference/_generated/types/audio_to_audio.py,sha256=2Ep4WkePL7oJwcp5nRJqApwviumGHbft9HhXE9XLHj4,891
+huggingface_hub/inference/_generated/types/automatic_speech_recognition.py,sha256=lWD_BMDMS3hreIq0kcLwOa8e0pXRH-oWUK96VaVc5DM,5624
+huggingface_hub/inference/_generated/types/base.py,sha256=4XG49q0-2SOftYQ8HXQnWLxiJktou-a7IoG3kdOv-kg,6751
+huggingface_hub/inference/_generated/types/chat_completion.py,sha256=rJUsET-Lqgt3AlW2zPIxOHc7XmhAZmaolbV8TGu4MmE,9885
+huggingface_hub/inference/_generated/types/depth_estimation.py,sha256=rcpe9MhYMeLjflOwBs3KMZPr6WjOH3FYEThStG-FJ3M,929
+huggingface_hub/inference/_generated/types/document_question_answering.py,sha256=6BEYGwJcqGlah4RBJDAvWFTEXkO0mosBiMy82432nAM,3202
+huggingface_hub/inference/_generated/types/feature_extraction.py,sha256=NMWVL_TLSG5SS5bdt1-fflkZ75UMlMKeTMtmdnUTADc,1537
+huggingface_hub/inference/_generated/types/fill_mask.py,sha256=OrTgQ7Ndn0_dWK5thQhZwTOHbQni8j0iJcx9llyhRds,1708
+huggingface_hub/inference/_generated/types/image_classification.py,sha256=A-Y024o8723_n8mGVos4TwdAkVL62McGeL1iIo4VzNs,1585
+huggingface_hub/inference/_generated/types/image_segmentation.py,sha256=vrkI4SuP1Iq_iLXc-2pQhYY3SHN4gzvFBoZqbUHxU7o,1950
+huggingface_hub/inference/_generated/types/image_to_image.py,sha256=uhJO63Ny3qhsN7KY9Y2rj1rzFuYaPczz5dlgDNOx-5k,1954
+huggingface_hub/inference/_generated/types/image_to_text.py,sha256=3hN7lpJoVuwUJme5gDdxZmXftb6cQ_7SXVC1VM8rXh8,4919
+huggingface_hub/inference/_generated/types/object_detection.py,sha256=VuFlb1281qTXoSgJDmquGz-VNfEZLo2H0Rh_F6MF6ts,2000
+huggingface_hub/inference/_generated/types/question_answering.py,sha256=zw38a9_9l2k1ifYZefjkioqZ4asfSRM9M4nU3gSCmAQ,2898
+huggingface_hub/inference/_generated/types/sentence_similarity.py,sha256=w5Nj1g18eBzopZwxuDLI-fEsyaCK2KrHA5yf_XfSjgo,1052
+huggingface_hub/inference/_generated/types/summarization.py,sha256=WGGr8uDLrZg8JQgF9ZMUP9euw6uZo6zwkVZ-IfvCFI0,1487
+huggingface_hub/inference/_generated/types/table_question_answering.py,sha256=cJnIPA2fIbQP2Ejn7X_esY48qGWoXg30fnNOqCXiOVQ,2293
+huggingface_hub/inference/_generated/types/text2text_generation.py,sha256=v-418w1JNNSZ2tuW9DUl6a36TQQCADa438A3ufvcbOw,1609
+huggingface_hub/inference/_generated/types/text_classification.py,sha256=FarAjygLEfPofLfKeabzJ7PKEBItlHGoUNUOzyLRpL4,1445
+huggingface_hub/inference/_generated/types/text_generation.py,sha256=Rk6kAbyWn7tI-tDamkoCAg61sQj3glNPxWdovs6WrQM,5907
+huggingface_hub/inference/_generated/types/text_to_audio.py,sha256=aE6NLpQ9V3ENIXOCFFcMaMjdLxZzZpE7iU1V-XYPU0w,4850
+huggingface_hub/inference/_generated/types/text_to_image.py,sha256=sGGi1Fa0n5Pmd6G3I-F2SBJcJ1M7Gmqnng6sfi0AVzs,1903
+huggingface_hub/inference/_generated/types/text_to_speech.py,sha256=5Md6d1eRBfeVQ4A32s7YoxM2HFfSLMz5B5QovGKfWbs,4869
+huggingface_hub/inference/_generated/types/text_to_video.py,sha256=yHXVNs3t6aYO7visrBlB5cH7kjoysxF9510aofcf_18,1790
+huggingface_hub/inference/_generated/types/token_classification.py,sha256=iblAcgfxXeaLYJ14NdiiCMIQuBlarUknLkXUklhvcLI,1915
+huggingface_hub/inference/_generated/types/translation.py,sha256=xww4X5cfCYv_F0oINWLwqJRPCT6SV3VBAJuPjTs_j7o,1763
+huggingface_hub/inference/_generated/types/video_classification.py,sha256=TyydjQw2NRLK9sDGzJUVnkDeo848ebmCx588Ur8I9q0,1680
+huggingface_hub/inference/_generated/types/visual_question_answering.py,sha256=AWrQ6qo4gZa3PGedaNpzDFqx5yOYyjhnUB6iuZEj_uo,1673
+huggingface_hub/inference/_generated/types/zero_shot_classification.py,sha256=BAiebPjsqoNa8EU35Dx0pfIv8W2c4GSl-TJckV1MaxQ,1738
+huggingface_hub/inference/_generated/types/zero_shot_image_classification.py,sha256=8J9n6VqFARkWvPfAZNWEG70AlrMGldU95EGQQwn06zI,1487
+huggingface_hub/inference/_generated/types/zero_shot_object_detection.py,sha256=GUd81LIV7oEbRWayDlAVgyLmY596r1M3AW0jXDp1yTA,1630
+huggingface_hub/inference/_providers/__init__.py,sha256=Q1hPPQgN3gKTa3NWQSANUBOB3oeCLr4miVQAVaZK8DU,5352
+huggingface_hub/inference/_providers/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/_common.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/black_forest_labs.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/fal_ai.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/fireworks_ai.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/hf_inference.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/hyperbolic.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/nebius.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/novita.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/replicate.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/sambanova.cpython-312.pyc,,
+huggingface_hub/inference/_providers/__pycache__/together.cpython-312.pyc,,
+huggingface_hub/inference/_providers/_common.py,sha256=8mgu95x46aRhvuHOVijczBpRJK4LFHusC_FU3t4iXGw,9200
+huggingface_hub/inference/_providers/black_forest_labs.py,sha256=YacbRSMwTcWMCtNfLZGRnjAwyOLAM9sIj06ZUKDb7n0,2647
+huggingface_hub/inference/_providers/fal_ai.py,sha256=pjWeMfxatAXSVJsEQf142MQVvAz5x-jtZLYXapXJFlI,3455
+huggingface_hub/inference/_providers/fireworks_ai.py,sha256=NazpDeD4agtFW6ISaXEvq5XAPVNeoG9XWk3O4NCxBNI,228
+huggingface_hub/inference/_providers/hf_inference.py,sha256=5CUR4LzPHiHfd5JN3ooP3DbOAyRgEzbQb0ZoaaiiNPY,5183
+huggingface_hub/inference/_providers/hyperbolic.py,sha256=qccC_gcMstGnvjmRyslgnuFVa9VAKS9w6F1ohwysvMU,1739
+huggingface_hub/inference/_providers/nebius.py,sha256=P34BO2y8MdBWqYzzt4VlkPePkXAIbMlRxvV87UhZVdU,1508
+huggingface_hub/inference/_providers/novita.py,sha256=SLOgZuAP1-Zs9NB2JmLf6kgX8R4O1Yy_64Ok9CmEZNs,745
+huggingface_hub/inference/_providers/replicate.py,sha256=5XVbbokgIz431rkIMchxcZgSAMU4vFiJ3xPgF8xyhz8,2263
+huggingface_hub/inference/_providers/sambanova.py,sha256=pR2MajO3ffga9FxzruzrTfTm3eBQ3AC0TPeSIdiQeco,249
+huggingface_hub/inference/_providers/together.py,sha256=HPVx9_pVc-b8PUl_aB1SPCngbfA7QK-tRV7_AzgTD_g,2028
+huggingface_hub/inference_api.py,sha256=b4-NhPSn9b44nYKV8tDKXodmE4JVdEymMWL4CVGkzlE,8323
+huggingface_hub/keras_mixin.py,sha256=3d2oW35SALXHq-WHoLD_tbq0UrcabGKj3HidtPRx51U,19574
+huggingface_hub/lfs.py,sha256=n-TIjK7J7aXG3zi__0nkd6aNkE4djOf9CD6dYQOQ5P8,16649
+huggingface_hub/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+huggingface_hub/repocard.py,sha256=ihFBKYqPNaWw9rWMUvcaRKxrooL32NA4fAlrwzXk9LY,34733
+huggingface_hub/repocard_data.py,sha256=EqJ-54QF0qngitsZwCkPQjPwzrkLpxt_qU4lxekMWs8,33247
+huggingface_hub/repository.py,sha256=xVQR-MRKNDfJ_Z_99DwtXZB3xNO06eYG_GvRM4fLiTU,54557
+huggingface_hub/serialization/__init__.py,sha256=kn-Fa-m4FzMnN8lNsF-SwFcfzug4CucexybGKyvZ8S0,1041
+huggingface_hub/serialization/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/serialization/__pycache__/_base.cpython-312.pyc,,
+huggingface_hub/serialization/__pycache__/_dduf.cpython-312.pyc,,
+huggingface_hub/serialization/__pycache__/_tensorflow.cpython-312.pyc,,
+huggingface_hub/serialization/__pycache__/_torch.cpython-312.pyc,,
+huggingface_hub/serialization/_base.py,sha256=Df3GwGR9NzeK_SD75prXLucJAzPiNPgHbgXSw-_LTk8,8126
+huggingface_hub/serialization/_dduf.py,sha256=s42239rLiHwaJE36QDEmS5GH7DSmQ__BffiHJO5RjIg,15424
+huggingface_hub/serialization/_tensorflow.py,sha256=zHOvEMg-JHC55Fm4roDT3LUCDO5zB9qtXZffG065RAM,3625
+huggingface_hub/serialization/_torch.py,sha256=WoNV_17x99Agx68mNMbi2g8T5CAVIkSb3_OaZx9KrX4,44714
+huggingface_hub/templates/datasetcard_template.md,sha256=W-EMqR6wndbrnZorkVv56URWPG49l7MATGeI015kTvs,5503
+huggingface_hub/templates/modelcard_template.md,sha256=4AqArS3cqdtbit5Bo-DhjcnDFR-pza5hErLLTPM4Yuc,6870
+huggingface_hub/utils/__init__.py,sha256=aMEsiXGi93z-dXz1W7FFma71tAMeKw0SoKVZSQUeE_4,3525
+huggingface_hub/utils/__pycache__/__init__.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_auth.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_cache_assets.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_cache_manager.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_chunk_utils.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_datetime.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_deprecation.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_experimental.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_fixes.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_git_credential.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_headers.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_hf_folder.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_http.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_lfs.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_pagination.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_paths.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_runtime.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_safetensors.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_subprocess.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_telemetry.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_typing.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/_validators.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/endpoint_helpers.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/insecure_hashlib.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/logging.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/sha.cpython-312.pyc,,
+huggingface_hub/utils/__pycache__/tqdm.cpython-312.pyc,,
+huggingface_hub/utils/_auth.py,sha256=-9p3SSOtWKMMCDKlsM_-ebsIGX0sSgKTSnC-_O4kTxg,8294
+huggingface_hub/utils/_cache_assets.py,sha256=kai77HPQMfYpROouMBQCr_gdBCaeTm996Sqj0dExbNg,5728
+huggingface_hub/utils/_cache_manager.py,sha256=GhiuVQsEkWU55uYkkgiGJV1_naeciyk8u4qb4WTIVyw,34531
+huggingface_hub/utils/_chunk_utils.py,sha256=kRCaj5228_vKcyLWspd8Xq01f17Jz6ds5Sr9ed5d_RU,2130
+huggingface_hub/utils/_datetime.py,sha256=kCS5jaKV25kOncX1xujbXsz5iDLcjLcLw85semGNzxQ,2770
+huggingface_hub/utils/_deprecation.py,sha256=HZhRGGUX_QMKBBBwHHlffLtmCSK01TOpeXHefZbPfwI,4872
+huggingface_hub/utils/_experimental.py,sha256=crCPH6k6-11wwH2GZuZzZzZbjUotay49ywV1SSJhMHM,2395
+huggingface_hub/utils/_fixes.py,sha256=xQV1QkUn2WpLqLjtXNiyn9gh-454K6AF-Q3kwkYAQD8,4437
+huggingface_hub/utils/_git_credential.py,sha256=SDdsiREr1TcAR2Ze2TB0E5cYzVJgvDZrs60od9lAsMc,4596
+huggingface_hub/utils/_headers.py,sha256=3tKQN5ciAt1683nZXEpPyQOS7oWnfYI0t_N_aJU-bms,8876
+huggingface_hub/utils/_hf_folder.py,sha256=WNjTnu0Q7tqcSS9EsP4ssCJrrJMcCvAt8P_-LEtmOU8,2487
+huggingface_hub/utils/_http.py,sha256=Nf4_Rpo9iqgOdrwwxjkZPAecfEGxdcGZ4w8Zb_qeesw,25301
+huggingface_hub/utils/_lfs.py,sha256=EC0Oz6Wiwl8foRNkUOzrETXzAWlbgpnpxo5a410ovFY,3957
+huggingface_hub/utils/_pagination.py,sha256=hzLFLd8i_DKkPRVYzOx2CxLt5lcocEiAxDJriQUjAjY,1841
+huggingface_hub/utils/_paths.py,sha256=w1ZhFmmD5ykWjp_hAvhjtOoa2ZUcOXJrF4a6O3QpAWo,5042
+huggingface_hub/utils/_runtime.py,sha256=tUyWylDgqaOXnMg39rvyusiruVN5ulcqiSwUEkQ9jjg,11195
+huggingface_hub/utils/_safetensors.py,sha256=GW3nyv7xQcuwObKYeYoT9VhURVzG1DZTbKBKho8Bbos,4458
+huggingface_hub/utils/_subprocess.py,sha256=6GpGD4qE9-Z1-Ocs3JuCLjR4NcRlknA-hAuQlqiprYY,4595
+huggingface_hub/utils/_telemetry.py,sha256=54LXeIJU5pEGghPAh06gqNAR-UoxOjVLvKqAQscwqZs,4890
+huggingface_hub/utils/_typing.py,sha256=Dgp6TQUlpzStfVLoSvXHCBP4b3NzHZ8E0Gg9mYAoDS4,2903
+huggingface_hub/utils/_validators.py,sha256=dDsVG31iooTYrIyi5Vwr1DukL0fEmJwu3ceVNduhsuE,9204
+huggingface_hub/utils/endpoint_helpers.py,sha256=9VtIAlxQ5H_4y30sjCAgbu7XCqAtNLC7aRYxaNn0hLI,2366
+huggingface_hub/utils/insecure_hashlib.py,sha256=OjxlvtSQHpbLp9PWSrXBDJ0wHjxCBU-SQJgucEEXDbU,1058
+huggingface_hub/utils/logging.py,sha256=0A8fF1yh3L9Ka_bCDX2ml4U5Ht0tY8Dr3JcbRvWFuwo,4909
+huggingface_hub/utils/sha.py,sha256=OFnNGCba0sNcT2gUwaVCJnldxlltrHHe0DS_PCpV3C4,2134
+huggingface_hub/utils/tqdm.py,sha256=ZgdphuTnwAIaUKnnD2P7qVvNHpzHAyrYoItkiV0aEjQ,9835
diff --git a/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/REQUESTED b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/REQUESTED
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/WHEEL b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/WHEEL
new file mode 100644
index 0000000000000000000000000000000000000000..79d5c89a71989389294854aa34e329701325f8b0
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/WHEEL
@@ -0,0 +1,5 @@
+Wheel-Version: 1.0
+Generator: bdist_wheel (0.45.1)
+Root-Is-Purelib: true
+Tag: py3-none-any
+
diff --git a/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/entry_points.txt b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/entry_points.txt
new file mode 100644
index 0000000000000000000000000000000000000000..eb3dafd90f19de60b3e520aeaf8132402980214d
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/entry_points.txt
@@ -0,0 +1,6 @@
+[console_scripts]
+huggingface-cli = huggingface_hub.commands.huggingface_cli:main
+
+[fsspec.specs]
+hf=huggingface_hub.HfFileSystem
+
diff --git a/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/top_level.txt b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/top_level.txt
new file mode 100644
index 0000000000000000000000000000000000000000..6b964ccca3c1b6766042b3fe3b2707ba25372924
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub-0.29.1.dist-info/top_level.txt
@@ -0,0 +1 @@
+huggingface_hub
diff --git a/env/Lib/site-packages/huggingface_hub/__init__.py b/env/Lib/site-packages/huggingface_hub/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..b322d99eaaca686f61a1aef292f78fa52163c491
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub/__init__.py
@@ -0,0 +1,1431 @@
+# Copyright 2020 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# ***********
+# `huggingface_hub` init has 2 modes:
+# - Normal usage:
+# If imported to use it, all modules and functions are lazy-loaded. This means
+# they exist at top level in module but are imported only the first time they are
+# used. This way, `from huggingface_hub import something` will import `something`
+# quickly without the hassle of importing all the features from `huggingface_hub`.
+# - Static check:
+# If statically analyzed, all modules and functions are loaded normally. This way
+# static typing check works properly as well as autocomplete in text editors and
+# IDEs.
+#
+# The static model imports are done inside the `if TYPE_CHECKING:` statement at
+# the bottom of this file. Since module/functions imports are duplicated, it is
+# mandatory to make sure to add them twice when adding one. This is checked in the
+# `make quality` command.
+#
+# To update the static imports, please run the following command and commit the changes.
+# ```
+# # Use script
+# python utils/check_static_imports.py --update-file
+#
+# # Or run style on codebase
+# make style
+# ```
+#
+# ***********
+# Lazy loader vendored from https://github.com/scientific-python/lazy_loader
+import importlib
+import os
+import sys
+from typing import TYPE_CHECKING
+
+
+__version__ = "0.29.1"
+
+# Alphabetical order of definitions is ensured in tests
+# WARNING: any comment added in this dictionary definition will be lost when
+# re-generating the file !
+_SUBMOD_ATTRS = {
+ "_commit_scheduler": [
+ "CommitScheduler",
+ ],
+ "_inference_endpoints": [
+ "InferenceEndpoint",
+ "InferenceEndpointError",
+ "InferenceEndpointStatus",
+ "InferenceEndpointTimeoutError",
+ "InferenceEndpointType",
+ ],
+ "_login": [
+ "auth_list",
+ "auth_switch",
+ "interpreter_login",
+ "login",
+ "logout",
+ "notebook_login",
+ ],
+ "_snapshot_download": [
+ "snapshot_download",
+ ],
+ "_space_api": [
+ "SpaceHardware",
+ "SpaceRuntime",
+ "SpaceStage",
+ "SpaceStorage",
+ "SpaceVariable",
+ ],
+ "_tensorboard_logger": [
+ "HFSummaryWriter",
+ ],
+ "_webhooks_payload": [
+ "WebhookPayload",
+ "WebhookPayloadComment",
+ "WebhookPayloadDiscussion",
+ "WebhookPayloadDiscussionChanges",
+ "WebhookPayloadEvent",
+ "WebhookPayloadMovedTo",
+ "WebhookPayloadRepo",
+ "WebhookPayloadUrl",
+ "WebhookPayloadWebhook",
+ ],
+ "_webhooks_server": [
+ "WebhooksServer",
+ "webhook_endpoint",
+ ],
+ "community": [
+ "Discussion",
+ "DiscussionComment",
+ "DiscussionCommit",
+ "DiscussionEvent",
+ "DiscussionStatusChange",
+ "DiscussionTitleChange",
+ "DiscussionWithDetails",
+ ],
+ "constants": [
+ "CONFIG_NAME",
+ "FLAX_WEIGHTS_NAME",
+ "HUGGINGFACE_CO_URL_HOME",
+ "HUGGINGFACE_CO_URL_TEMPLATE",
+ "PYTORCH_WEIGHTS_NAME",
+ "REPO_TYPE_DATASET",
+ "REPO_TYPE_MODEL",
+ "REPO_TYPE_SPACE",
+ "TF2_WEIGHTS_NAME",
+ "TF_WEIGHTS_NAME",
+ ],
+ "fastai_utils": [
+ "_save_pretrained_fastai",
+ "from_pretrained_fastai",
+ "push_to_hub_fastai",
+ ],
+ "file_download": [
+ "HfFileMetadata",
+ "_CACHED_NO_EXIST",
+ "get_hf_file_metadata",
+ "hf_hub_download",
+ "hf_hub_url",
+ "try_to_load_from_cache",
+ ],
+ "hf_api": [
+ "Collection",
+ "CollectionItem",
+ "CommitInfo",
+ "CommitOperation",
+ "CommitOperationAdd",
+ "CommitOperationCopy",
+ "CommitOperationDelete",
+ "DatasetInfo",
+ "GitCommitInfo",
+ "GitRefInfo",
+ "GitRefs",
+ "HfApi",
+ "ModelInfo",
+ "RepoUrl",
+ "SpaceInfo",
+ "User",
+ "UserLikes",
+ "WebhookInfo",
+ "WebhookWatchedItem",
+ "accept_access_request",
+ "add_collection_item",
+ "add_space_secret",
+ "add_space_variable",
+ "auth_check",
+ "cancel_access_request",
+ "change_discussion_status",
+ "comment_discussion",
+ "create_branch",
+ "create_collection",
+ "create_commit",
+ "create_discussion",
+ "create_inference_endpoint",
+ "create_pull_request",
+ "create_repo",
+ "create_tag",
+ "create_webhook",
+ "dataset_info",
+ "delete_branch",
+ "delete_collection",
+ "delete_collection_item",
+ "delete_file",
+ "delete_folder",
+ "delete_inference_endpoint",
+ "delete_repo",
+ "delete_space_secret",
+ "delete_space_storage",
+ "delete_space_variable",
+ "delete_tag",
+ "delete_webhook",
+ "disable_webhook",
+ "duplicate_space",
+ "edit_discussion_comment",
+ "enable_webhook",
+ "file_exists",
+ "get_collection",
+ "get_dataset_tags",
+ "get_discussion_details",
+ "get_full_repo_name",
+ "get_inference_endpoint",
+ "get_model_tags",
+ "get_paths_info",
+ "get_repo_discussions",
+ "get_safetensors_metadata",
+ "get_space_runtime",
+ "get_space_variables",
+ "get_token_permission",
+ "get_user_overview",
+ "get_webhook",
+ "grant_access",
+ "list_accepted_access_requests",
+ "list_collections",
+ "list_datasets",
+ "list_inference_endpoints",
+ "list_liked_repos",
+ "list_models",
+ "list_organization_members",
+ "list_papers",
+ "list_pending_access_requests",
+ "list_rejected_access_requests",
+ "list_repo_commits",
+ "list_repo_files",
+ "list_repo_likers",
+ "list_repo_refs",
+ "list_repo_tree",
+ "list_spaces",
+ "list_user_followers",
+ "list_user_following",
+ "list_webhooks",
+ "merge_pull_request",
+ "model_info",
+ "move_repo",
+ "paper_info",
+ "parse_safetensors_file_metadata",
+ "pause_inference_endpoint",
+ "pause_space",
+ "preupload_lfs_files",
+ "reject_access_request",
+ "rename_discussion",
+ "repo_exists",
+ "repo_info",
+ "repo_type_and_id_from_hf_id",
+ "request_space_hardware",
+ "request_space_storage",
+ "restart_space",
+ "resume_inference_endpoint",
+ "revision_exists",
+ "run_as_future",
+ "scale_to_zero_inference_endpoint",
+ "set_space_sleep_time",
+ "space_info",
+ "super_squash_history",
+ "unlike",
+ "update_collection_item",
+ "update_collection_metadata",
+ "update_inference_endpoint",
+ "update_repo_settings",
+ "update_repo_visibility",
+ "update_webhook",
+ "upload_file",
+ "upload_folder",
+ "upload_large_folder",
+ "whoami",
+ ],
+ "hf_file_system": [
+ "HfFileSystem",
+ "HfFileSystemFile",
+ "HfFileSystemResolvedPath",
+ "HfFileSystemStreamFile",
+ ],
+ "hub_mixin": [
+ "ModelHubMixin",
+ "PyTorchModelHubMixin",
+ ],
+ "inference._client": [
+ "InferenceClient",
+ "InferenceTimeoutError",
+ ],
+ "inference._generated._async_client": [
+ "AsyncInferenceClient",
+ ],
+ "inference._generated.types": [
+ "AudioClassificationInput",
+ "AudioClassificationOutputElement",
+ "AudioClassificationOutputTransform",
+ "AudioClassificationParameters",
+ "AudioToAudioInput",
+ "AudioToAudioOutputElement",
+ "AutomaticSpeechRecognitionEarlyStoppingEnum",
+ "AutomaticSpeechRecognitionGenerationParameters",
+ "AutomaticSpeechRecognitionInput",
+ "AutomaticSpeechRecognitionOutput",
+ "AutomaticSpeechRecognitionOutputChunk",
+ "AutomaticSpeechRecognitionParameters",
+ "ChatCompletionInput",
+ "ChatCompletionInputFunctionDefinition",
+ "ChatCompletionInputFunctionName",
+ "ChatCompletionInputGrammarType",
+ "ChatCompletionInputGrammarTypeType",
+ "ChatCompletionInputMessage",
+ "ChatCompletionInputMessageChunk",
+ "ChatCompletionInputMessageChunkType",
+ "ChatCompletionInputStreamOptions",
+ "ChatCompletionInputTool",
+ "ChatCompletionInputToolChoiceClass",
+ "ChatCompletionInputToolChoiceEnum",
+ "ChatCompletionInputURL",
+ "ChatCompletionOutput",
+ "ChatCompletionOutputComplete",
+ "ChatCompletionOutputFunctionDefinition",
+ "ChatCompletionOutputLogprob",
+ "ChatCompletionOutputLogprobs",
+ "ChatCompletionOutputMessage",
+ "ChatCompletionOutputToolCall",
+ "ChatCompletionOutputTopLogprob",
+ "ChatCompletionOutputUsage",
+ "ChatCompletionStreamOutput",
+ "ChatCompletionStreamOutputChoice",
+ "ChatCompletionStreamOutputDelta",
+ "ChatCompletionStreamOutputDeltaToolCall",
+ "ChatCompletionStreamOutputFunction",
+ "ChatCompletionStreamOutputLogprob",
+ "ChatCompletionStreamOutputLogprobs",
+ "ChatCompletionStreamOutputTopLogprob",
+ "ChatCompletionStreamOutputUsage",
+ "DepthEstimationInput",
+ "DepthEstimationOutput",
+ "DocumentQuestionAnsweringInput",
+ "DocumentQuestionAnsweringInputData",
+ "DocumentQuestionAnsweringOutputElement",
+ "DocumentQuestionAnsweringParameters",
+ "FeatureExtractionInput",
+ "FeatureExtractionInputTruncationDirection",
+ "FillMaskInput",
+ "FillMaskOutputElement",
+ "FillMaskParameters",
+ "ImageClassificationInput",
+ "ImageClassificationOutputElement",
+ "ImageClassificationOutputTransform",
+ "ImageClassificationParameters",
+ "ImageSegmentationInput",
+ "ImageSegmentationOutputElement",
+ "ImageSegmentationParameters",
+ "ImageSegmentationSubtask",
+ "ImageToImageInput",
+ "ImageToImageOutput",
+ "ImageToImageParameters",
+ "ImageToImageTargetSize",
+ "ImageToTextEarlyStoppingEnum",
+ "ImageToTextGenerationParameters",
+ "ImageToTextInput",
+ "ImageToTextOutput",
+ "ImageToTextParameters",
+ "ObjectDetectionBoundingBox",
+ "ObjectDetectionInput",
+ "ObjectDetectionOutputElement",
+ "ObjectDetectionParameters",
+ "Padding",
+ "QuestionAnsweringInput",
+ "QuestionAnsweringInputData",
+ "QuestionAnsweringOutputElement",
+ "QuestionAnsweringParameters",
+ "SentenceSimilarityInput",
+ "SentenceSimilarityInputData",
+ "SummarizationInput",
+ "SummarizationOutput",
+ "SummarizationParameters",
+ "SummarizationTruncationStrategy",
+ "TableQuestionAnsweringInput",
+ "TableQuestionAnsweringInputData",
+ "TableQuestionAnsweringOutputElement",
+ "TableQuestionAnsweringParameters",
+ "Text2TextGenerationInput",
+ "Text2TextGenerationOutput",
+ "Text2TextGenerationParameters",
+ "Text2TextGenerationTruncationStrategy",
+ "TextClassificationInput",
+ "TextClassificationOutputElement",
+ "TextClassificationOutputTransform",
+ "TextClassificationParameters",
+ "TextGenerationInput",
+ "TextGenerationInputGenerateParameters",
+ "TextGenerationInputGrammarType",
+ "TextGenerationOutput",
+ "TextGenerationOutputBestOfSequence",
+ "TextGenerationOutputDetails",
+ "TextGenerationOutputFinishReason",
+ "TextGenerationOutputPrefillToken",
+ "TextGenerationOutputToken",
+ "TextGenerationStreamOutput",
+ "TextGenerationStreamOutputStreamDetails",
+ "TextGenerationStreamOutputToken",
+ "TextToAudioEarlyStoppingEnum",
+ "TextToAudioGenerationParameters",
+ "TextToAudioInput",
+ "TextToAudioOutput",
+ "TextToAudioParameters",
+ "TextToImageInput",
+ "TextToImageOutput",
+ "TextToImageParameters",
+ "TextToSpeechEarlyStoppingEnum",
+ "TextToSpeechGenerationParameters",
+ "TextToSpeechInput",
+ "TextToSpeechOutput",
+ "TextToSpeechParameters",
+ "TextToVideoInput",
+ "TextToVideoOutput",
+ "TextToVideoParameters",
+ "TokenClassificationAggregationStrategy",
+ "TokenClassificationInput",
+ "TokenClassificationOutputElement",
+ "TokenClassificationParameters",
+ "TranslationInput",
+ "TranslationOutput",
+ "TranslationParameters",
+ "TranslationTruncationStrategy",
+ "TypeEnum",
+ "VideoClassificationInput",
+ "VideoClassificationOutputElement",
+ "VideoClassificationOutputTransform",
+ "VideoClassificationParameters",
+ "VisualQuestionAnsweringInput",
+ "VisualQuestionAnsweringInputData",
+ "VisualQuestionAnsweringOutputElement",
+ "VisualQuestionAnsweringParameters",
+ "ZeroShotClassificationInput",
+ "ZeroShotClassificationOutputElement",
+ "ZeroShotClassificationParameters",
+ "ZeroShotImageClassificationInput",
+ "ZeroShotImageClassificationOutputElement",
+ "ZeroShotImageClassificationParameters",
+ "ZeroShotObjectDetectionBoundingBox",
+ "ZeroShotObjectDetectionInput",
+ "ZeroShotObjectDetectionOutputElement",
+ "ZeroShotObjectDetectionParameters",
+ ],
+ "inference_api": [
+ "InferenceApi",
+ ],
+ "keras_mixin": [
+ "KerasModelHubMixin",
+ "from_pretrained_keras",
+ "push_to_hub_keras",
+ "save_pretrained_keras",
+ ],
+ "repocard": [
+ "DatasetCard",
+ "ModelCard",
+ "RepoCard",
+ "SpaceCard",
+ "metadata_eval_result",
+ "metadata_load",
+ "metadata_save",
+ "metadata_update",
+ ],
+ "repocard_data": [
+ "CardData",
+ "DatasetCardData",
+ "EvalResult",
+ "ModelCardData",
+ "SpaceCardData",
+ ],
+ "repository": [
+ "Repository",
+ ],
+ "serialization": [
+ "StateDictSplit",
+ "get_tf_storage_size",
+ "get_torch_storage_id",
+ "get_torch_storage_size",
+ "load_state_dict_from_file",
+ "load_torch_model",
+ "save_torch_model",
+ "save_torch_state_dict",
+ "split_state_dict_into_shards_factory",
+ "split_tf_state_dict_into_shards",
+ "split_torch_state_dict_into_shards",
+ ],
+ "serialization._dduf": [
+ "DDUFEntry",
+ "export_entries_as_dduf",
+ "export_folder_as_dduf",
+ "read_dduf_file",
+ ],
+ "utils": [
+ "CacheNotFound",
+ "CachedFileInfo",
+ "CachedRepoInfo",
+ "CachedRevisionInfo",
+ "CorruptedCacheException",
+ "DeleteCacheStrategy",
+ "HFCacheInfo",
+ "HfFolder",
+ "cached_assets_path",
+ "configure_http_backend",
+ "dump_environment_info",
+ "get_session",
+ "get_token",
+ "logging",
+ "scan_cache_dir",
+ ],
+}
+
+# WARNING: __all__ is generated automatically, Any manual edit will be lost when re-generating this file !
+#
+# To update the static imports, please run the following command and commit the changes.
+# ```
+# # Use script
+# python utils/check_all_variable.py --update
+#
+# # Or run style on codebase
+# make style
+# ```
+
+__all__ = [
+ "AsyncInferenceClient",
+ "AudioClassificationInput",
+ "AudioClassificationOutputElement",
+ "AudioClassificationOutputTransform",
+ "AudioClassificationParameters",
+ "AudioToAudioInput",
+ "AudioToAudioOutputElement",
+ "AutomaticSpeechRecognitionEarlyStoppingEnum",
+ "AutomaticSpeechRecognitionGenerationParameters",
+ "AutomaticSpeechRecognitionInput",
+ "AutomaticSpeechRecognitionOutput",
+ "AutomaticSpeechRecognitionOutputChunk",
+ "AutomaticSpeechRecognitionParameters",
+ "CONFIG_NAME",
+ "CacheNotFound",
+ "CachedFileInfo",
+ "CachedRepoInfo",
+ "CachedRevisionInfo",
+ "CardData",
+ "ChatCompletionInput",
+ "ChatCompletionInputFunctionDefinition",
+ "ChatCompletionInputFunctionName",
+ "ChatCompletionInputGrammarType",
+ "ChatCompletionInputGrammarTypeType",
+ "ChatCompletionInputMessage",
+ "ChatCompletionInputMessageChunk",
+ "ChatCompletionInputMessageChunkType",
+ "ChatCompletionInputStreamOptions",
+ "ChatCompletionInputTool",
+ "ChatCompletionInputToolChoiceClass",
+ "ChatCompletionInputToolChoiceEnum",
+ "ChatCompletionInputURL",
+ "ChatCompletionOutput",
+ "ChatCompletionOutputComplete",
+ "ChatCompletionOutputFunctionDefinition",
+ "ChatCompletionOutputLogprob",
+ "ChatCompletionOutputLogprobs",
+ "ChatCompletionOutputMessage",
+ "ChatCompletionOutputToolCall",
+ "ChatCompletionOutputTopLogprob",
+ "ChatCompletionOutputUsage",
+ "ChatCompletionStreamOutput",
+ "ChatCompletionStreamOutputChoice",
+ "ChatCompletionStreamOutputDelta",
+ "ChatCompletionStreamOutputDeltaToolCall",
+ "ChatCompletionStreamOutputFunction",
+ "ChatCompletionStreamOutputLogprob",
+ "ChatCompletionStreamOutputLogprobs",
+ "ChatCompletionStreamOutputTopLogprob",
+ "ChatCompletionStreamOutputUsage",
+ "Collection",
+ "CollectionItem",
+ "CommitInfo",
+ "CommitOperation",
+ "CommitOperationAdd",
+ "CommitOperationCopy",
+ "CommitOperationDelete",
+ "CommitScheduler",
+ "CorruptedCacheException",
+ "DDUFEntry",
+ "DatasetCard",
+ "DatasetCardData",
+ "DatasetInfo",
+ "DeleteCacheStrategy",
+ "DepthEstimationInput",
+ "DepthEstimationOutput",
+ "Discussion",
+ "DiscussionComment",
+ "DiscussionCommit",
+ "DiscussionEvent",
+ "DiscussionStatusChange",
+ "DiscussionTitleChange",
+ "DiscussionWithDetails",
+ "DocumentQuestionAnsweringInput",
+ "DocumentQuestionAnsweringInputData",
+ "DocumentQuestionAnsweringOutputElement",
+ "DocumentQuestionAnsweringParameters",
+ "EvalResult",
+ "FLAX_WEIGHTS_NAME",
+ "FeatureExtractionInput",
+ "FeatureExtractionInputTruncationDirection",
+ "FillMaskInput",
+ "FillMaskOutputElement",
+ "FillMaskParameters",
+ "GitCommitInfo",
+ "GitRefInfo",
+ "GitRefs",
+ "HFCacheInfo",
+ "HFSummaryWriter",
+ "HUGGINGFACE_CO_URL_HOME",
+ "HUGGINGFACE_CO_URL_TEMPLATE",
+ "HfApi",
+ "HfFileMetadata",
+ "HfFileSystem",
+ "HfFileSystemFile",
+ "HfFileSystemResolvedPath",
+ "HfFileSystemStreamFile",
+ "HfFolder",
+ "ImageClassificationInput",
+ "ImageClassificationOutputElement",
+ "ImageClassificationOutputTransform",
+ "ImageClassificationParameters",
+ "ImageSegmentationInput",
+ "ImageSegmentationOutputElement",
+ "ImageSegmentationParameters",
+ "ImageSegmentationSubtask",
+ "ImageToImageInput",
+ "ImageToImageOutput",
+ "ImageToImageParameters",
+ "ImageToImageTargetSize",
+ "ImageToTextEarlyStoppingEnum",
+ "ImageToTextGenerationParameters",
+ "ImageToTextInput",
+ "ImageToTextOutput",
+ "ImageToTextParameters",
+ "InferenceApi",
+ "InferenceClient",
+ "InferenceEndpoint",
+ "InferenceEndpointError",
+ "InferenceEndpointStatus",
+ "InferenceEndpointTimeoutError",
+ "InferenceEndpointType",
+ "InferenceTimeoutError",
+ "KerasModelHubMixin",
+ "ModelCard",
+ "ModelCardData",
+ "ModelHubMixin",
+ "ModelInfo",
+ "ObjectDetectionBoundingBox",
+ "ObjectDetectionInput",
+ "ObjectDetectionOutputElement",
+ "ObjectDetectionParameters",
+ "PYTORCH_WEIGHTS_NAME",
+ "Padding",
+ "PyTorchModelHubMixin",
+ "QuestionAnsweringInput",
+ "QuestionAnsweringInputData",
+ "QuestionAnsweringOutputElement",
+ "QuestionAnsweringParameters",
+ "REPO_TYPE_DATASET",
+ "REPO_TYPE_MODEL",
+ "REPO_TYPE_SPACE",
+ "RepoCard",
+ "RepoUrl",
+ "Repository",
+ "SentenceSimilarityInput",
+ "SentenceSimilarityInputData",
+ "SpaceCard",
+ "SpaceCardData",
+ "SpaceHardware",
+ "SpaceInfo",
+ "SpaceRuntime",
+ "SpaceStage",
+ "SpaceStorage",
+ "SpaceVariable",
+ "StateDictSplit",
+ "SummarizationInput",
+ "SummarizationOutput",
+ "SummarizationParameters",
+ "SummarizationTruncationStrategy",
+ "TF2_WEIGHTS_NAME",
+ "TF_WEIGHTS_NAME",
+ "TableQuestionAnsweringInput",
+ "TableQuestionAnsweringInputData",
+ "TableQuestionAnsweringOutputElement",
+ "TableQuestionAnsweringParameters",
+ "Text2TextGenerationInput",
+ "Text2TextGenerationOutput",
+ "Text2TextGenerationParameters",
+ "Text2TextGenerationTruncationStrategy",
+ "TextClassificationInput",
+ "TextClassificationOutputElement",
+ "TextClassificationOutputTransform",
+ "TextClassificationParameters",
+ "TextGenerationInput",
+ "TextGenerationInputGenerateParameters",
+ "TextGenerationInputGrammarType",
+ "TextGenerationOutput",
+ "TextGenerationOutputBestOfSequence",
+ "TextGenerationOutputDetails",
+ "TextGenerationOutputFinishReason",
+ "TextGenerationOutputPrefillToken",
+ "TextGenerationOutputToken",
+ "TextGenerationStreamOutput",
+ "TextGenerationStreamOutputStreamDetails",
+ "TextGenerationStreamOutputToken",
+ "TextToAudioEarlyStoppingEnum",
+ "TextToAudioGenerationParameters",
+ "TextToAudioInput",
+ "TextToAudioOutput",
+ "TextToAudioParameters",
+ "TextToImageInput",
+ "TextToImageOutput",
+ "TextToImageParameters",
+ "TextToSpeechEarlyStoppingEnum",
+ "TextToSpeechGenerationParameters",
+ "TextToSpeechInput",
+ "TextToSpeechOutput",
+ "TextToSpeechParameters",
+ "TextToVideoInput",
+ "TextToVideoOutput",
+ "TextToVideoParameters",
+ "TokenClassificationAggregationStrategy",
+ "TokenClassificationInput",
+ "TokenClassificationOutputElement",
+ "TokenClassificationParameters",
+ "TranslationInput",
+ "TranslationOutput",
+ "TranslationParameters",
+ "TranslationTruncationStrategy",
+ "TypeEnum",
+ "User",
+ "UserLikes",
+ "VideoClassificationInput",
+ "VideoClassificationOutputElement",
+ "VideoClassificationOutputTransform",
+ "VideoClassificationParameters",
+ "VisualQuestionAnsweringInput",
+ "VisualQuestionAnsweringInputData",
+ "VisualQuestionAnsweringOutputElement",
+ "VisualQuestionAnsweringParameters",
+ "WebhookInfo",
+ "WebhookPayload",
+ "WebhookPayloadComment",
+ "WebhookPayloadDiscussion",
+ "WebhookPayloadDiscussionChanges",
+ "WebhookPayloadEvent",
+ "WebhookPayloadMovedTo",
+ "WebhookPayloadRepo",
+ "WebhookPayloadUrl",
+ "WebhookPayloadWebhook",
+ "WebhookWatchedItem",
+ "WebhooksServer",
+ "ZeroShotClassificationInput",
+ "ZeroShotClassificationOutputElement",
+ "ZeroShotClassificationParameters",
+ "ZeroShotImageClassificationInput",
+ "ZeroShotImageClassificationOutputElement",
+ "ZeroShotImageClassificationParameters",
+ "ZeroShotObjectDetectionBoundingBox",
+ "ZeroShotObjectDetectionInput",
+ "ZeroShotObjectDetectionOutputElement",
+ "ZeroShotObjectDetectionParameters",
+ "_CACHED_NO_EXIST",
+ "_save_pretrained_fastai",
+ "accept_access_request",
+ "add_collection_item",
+ "add_space_secret",
+ "add_space_variable",
+ "auth_check",
+ "auth_list",
+ "auth_switch",
+ "cached_assets_path",
+ "cancel_access_request",
+ "change_discussion_status",
+ "comment_discussion",
+ "configure_http_backend",
+ "create_branch",
+ "create_collection",
+ "create_commit",
+ "create_discussion",
+ "create_inference_endpoint",
+ "create_pull_request",
+ "create_repo",
+ "create_tag",
+ "create_webhook",
+ "dataset_info",
+ "delete_branch",
+ "delete_collection",
+ "delete_collection_item",
+ "delete_file",
+ "delete_folder",
+ "delete_inference_endpoint",
+ "delete_repo",
+ "delete_space_secret",
+ "delete_space_storage",
+ "delete_space_variable",
+ "delete_tag",
+ "delete_webhook",
+ "disable_webhook",
+ "dump_environment_info",
+ "duplicate_space",
+ "edit_discussion_comment",
+ "enable_webhook",
+ "export_entries_as_dduf",
+ "export_folder_as_dduf",
+ "file_exists",
+ "from_pretrained_fastai",
+ "from_pretrained_keras",
+ "get_collection",
+ "get_dataset_tags",
+ "get_discussion_details",
+ "get_full_repo_name",
+ "get_hf_file_metadata",
+ "get_inference_endpoint",
+ "get_model_tags",
+ "get_paths_info",
+ "get_repo_discussions",
+ "get_safetensors_metadata",
+ "get_session",
+ "get_space_runtime",
+ "get_space_variables",
+ "get_tf_storage_size",
+ "get_token",
+ "get_token_permission",
+ "get_torch_storage_id",
+ "get_torch_storage_size",
+ "get_user_overview",
+ "get_webhook",
+ "grant_access",
+ "hf_hub_download",
+ "hf_hub_url",
+ "interpreter_login",
+ "list_accepted_access_requests",
+ "list_collections",
+ "list_datasets",
+ "list_inference_endpoints",
+ "list_liked_repos",
+ "list_models",
+ "list_organization_members",
+ "list_papers",
+ "list_pending_access_requests",
+ "list_rejected_access_requests",
+ "list_repo_commits",
+ "list_repo_files",
+ "list_repo_likers",
+ "list_repo_refs",
+ "list_repo_tree",
+ "list_spaces",
+ "list_user_followers",
+ "list_user_following",
+ "list_webhooks",
+ "load_state_dict_from_file",
+ "load_torch_model",
+ "logging",
+ "login",
+ "logout",
+ "merge_pull_request",
+ "metadata_eval_result",
+ "metadata_load",
+ "metadata_save",
+ "metadata_update",
+ "model_info",
+ "move_repo",
+ "notebook_login",
+ "paper_info",
+ "parse_safetensors_file_metadata",
+ "pause_inference_endpoint",
+ "pause_space",
+ "preupload_lfs_files",
+ "push_to_hub_fastai",
+ "push_to_hub_keras",
+ "read_dduf_file",
+ "reject_access_request",
+ "rename_discussion",
+ "repo_exists",
+ "repo_info",
+ "repo_type_and_id_from_hf_id",
+ "request_space_hardware",
+ "request_space_storage",
+ "restart_space",
+ "resume_inference_endpoint",
+ "revision_exists",
+ "run_as_future",
+ "save_pretrained_keras",
+ "save_torch_model",
+ "save_torch_state_dict",
+ "scale_to_zero_inference_endpoint",
+ "scan_cache_dir",
+ "set_space_sleep_time",
+ "snapshot_download",
+ "space_info",
+ "split_state_dict_into_shards_factory",
+ "split_tf_state_dict_into_shards",
+ "split_torch_state_dict_into_shards",
+ "super_squash_history",
+ "try_to_load_from_cache",
+ "unlike",
+ "update_collection_item",
+ "update_collection_metadata",
+ "update_inference_endpoint",
+ "update_repo_settings",
+ "update_repo_visibility",
+ "update_webhook",
+ "upload_file",
+ "upload_folder",
+ "upload_large_folder",
+ "webhook_endpoint",
+ "whoami",
+]
+
+
+def _attach(package_name, submodules=None, submod_attrs=None):
+ """Attach lazily loaded submodules, functions, or other attributes.
+
+ Typically, modules import submodules and attributes as follows:
+
+ ```py
+ import mysubmodule
+ import anothersubmodule
+
+ from .foo import someattr
+ ```
+
+ The idea is to replace a package's `__getattr__`, `__dir__`, such that all imports
+ work exactly the way they would with normal imports, except that the import occurs
+ upon first use.
+
+ The typical way to call this function, replacing the above imports, is:
+
+ ```python
+ __getattr__, __dir__ = lazy.attach(
+ __name__,
+ ['mysubmodule', 'anothersubmodule'],
+ {'foo': ['someattr']}
+ )
+ ```
+ This functionality requires Python 3.7 or higher.
+
+ Args:
+ package_name (`str`):
+ Typically use `__name__`.
+ submodules (`set`):
+ List of submodules to attach.
+ submod_attrs (`dict`):
+ Dictionary of submodule -> list of attributes / functions.
+ These attributes are imported as they are used.
+
+ Returns:
+ __getattr__, __dir__, __all__
+
+ """
+ if submod_attrs is None:
+ submod_attrs = {}
+
+ if submodules is None:
+ submodules = set()
+ else:
+ submodules = set(submodules)
+
+ attr_to_modules = {attr: mod for mod, attrs in submod_attrs.items() for attr in attrs}
+
+ def __getattr__(name):
+ if name in submodules:
+ try:
+ return importlib.import_module(f"{package_name}.{name}")
+ except Exception as e:
+ print(f"Error importing {package_name}.{name}: {e}")
+ raise
+ elif name in attr_to_modules:
+ submod_path = f"{package_name}.{attr_to_modules[name]}"
+ try:
+ submod = importlib.import_module(submod_path)
+ except Exception as e:
+ print(f"Error importing {submod_path}: {e}")
+ raise
+ attr = getattr(submod, name)
+
+ # If the attribute lives in a file (module) with the same
+ # name as the attribute, ensure that the attribute and *not*
+ # the module is accessible on the package.
+ if name == attr_to_modules[name]:
+ pkg = sys.modules[package_name]
+ pkg.__dict__[name] = attr
+
+ return attr
+ else:
+ raise AttributeError(f"No {package_name} attribute {name}")
+
+ def __dir__():
+ return __all__
+
+ return __getattr__, __dir__
+
+
+__getattr__, __dir__ = _attach(__name__, submodules=[], submod_attrs=_SUBMOD_ATTRS)
+
+if os.environ.get("EAGER_IMPORT", ""):
+ for attr in __all__:
+ __getattr__(attr)
+
+# WARNING: any content below this statement is generated automatically. Any manual edit
+# will be lost when re-generating this file !
+#
+# To update the static imports, please run the following command and commit the changes.
+# ```
+# # Use script
+# python utils/check_static_imports.py --update
+#
+# # Or run style on codebase
+# make style
+# ```
+if TYPE_CHECKING: # pragma: no cover
+ from ._commit_scheduler import CommitScheduler # noqa: F401
+ from ._inference_endpoints import (
+ InferenceEndpoint, # noqa: F401
+ InferenceEndpointError, # noqa: F401
+ InferenceEndpointStatus, # noqa: F401
+ InferenceEndpointTimeoutError, # noqa: F401
+ InferenceEndpointType, # noqa: F401
+ )
+ from ._login import (
+ auth_list, # noqa: F401
+ auth_switch, # noqa: F401
+ interpreter_login, # noqa: F401
+ login, # noqa: F401
+ logout, # noqa: F401
+ notebook_login, # noqa: F401
+ )
+ from ._snapshot_download import snapshot_download # noqa: F401
+ from ._space_api import (
+ SpaceHardware, # noqa: F401
+ SpaceRuntime, # noqa: F401
+ SpaceStage, # noqa: F401
+ SpaceStorage, # noqa: F401
+ SpaceVariable, # noqa: F401
+ )
+ from ._tensorboard_logger import HFSummaryWriter # noqa: F401
+ from ._webhooks_payload import (
+ WebhookPayload, # noqa: F401
+ WebhookPayloadComment, # noqa: F401
+ WebhookPayloadDiscussion, # noqa: F401
+ WebhookPayloadDiscussionChanges, # noqa: F401
+ WebhookPayloadEvent, # noqa: F401
+ WebhookPayloadMovedTo, # noqa: F401
+ WebhookPayloadRepo, # noqa: F401
+ WebhookPayloadUrl, # noqa: F401
+ WebhookPayloadWebhook, # noqa: F401
+ )
+ from ._webhooks_server import (
+ WebhooksServer, # noqa: F401
+ webhook_endpoint, # noqa: F401
+ )
+ from .community import (
+ Discussion, # noqa: F401
+ DiscussionComment, # noqa: F401
+ DiscussionCommit, # noqa: F401
+ DiscussionEvent, # noqa: F401
+ DiscussionStatusChange, # noqa: F401
+ DiscussionTitleChange, # noqa: F401
+ DiscussionWithDetails, # noqa: F401
+ )
+ from .constants import (
+ CONFIG_NAME, # noqa: F401
+ FLAX_WEIGHTS_NAME, # noqa: F401
+ HUGGINGFACE_CO_URL_HOME, # noqa: F401
+ HUGGINGFACE_CO_URL_TEMPLATE, # noqa: F401
+ PYTORCH_WEIGHTS_NAME, # noqa: F401
+ REPO_TYPE_DATASET, # noqa: F401
+ REPO_TYPE_MODEL, # noqa: F401
+ REPO_TYPE_SPACE, # noqa: F401
+ TF2_WEIGHTS_NAME, # noqa: F401
+ TF_WEIGHTS_NAME, # noqa: F401
+ )
+ from .fastai_utils import (
+ _save_pretrained_fastai, # noqa: F401
+ from_pretrained_fastai, # noqa: F401
+ push_to_hub_fastai, # noqa: F401
+ )
+ from .file_download import (
+ _CACHED_NO_EXIST, # noqa: F401
+ HfFileMetadata, # noqa: F401
+ get_hf_file_metadata, # noqa: F401
+ hf_hub_download, # noqa: F401
+ hf_hub_url, # noqa: F401
+ try_to_load_from_cache, # noqa: F401
+ )
+ from .hf_api import (
+ Collection, # noqa: F401
+ CollectionItem, # noqa: F401
+ CommitInfo, # noqa: F401
+ CommitOperation, # noqa: F401
+ CommitOperationAdd, # noqa: F401
+ CommitOperationCopy, # noqa: F401
+ CommitOperationDelete, # noqa: F401
+ DatasetInfo, # noqa: F401
+ GitCommitInfo, # noqa: F401
+ GitRefInfo, # noqa: F401
+ GitRefs, # noqa: F401
+ HfApi, # noqa: F401
+ ModelInfo, # noqa: F401
+ RepoUrl, # noqa: F401
+ SpaceInfo, # noqa: F401
+ User, # noqa: F401
+ UserLikes, # noqa: F401
+ WebhookInfo, # noqa: F401
+ WebhookWatchedItem, # noqa: F401
+ accept_access_request, # noqa: F401
+ add_collection_item, # noqa: F401
+ add_space_secret, # noqa: F401
+ add_space_variable, # noqa: F401
+ auth_check, # noqa: F401
+ cancel_access_request, # noqa: F401
+ change_discussion_status, # noqa: F401
+ comment_discussion, # noqa: F401
+ create_branch, # noqa: F401
+ create_collection, # noqa: F401
+ create_commit, # noqa: F401
+ create_discussion, # noqa: F401
+ create_inference_endpoint, # noqa: F401
+ create_pull_request, # noqa: F401
+ create_repo, # noqa: F401
+ create_tag, # noqa: F401
+ create_webhook, # noqa: F401
+ dataset_info, # noqa: F401
+ delete_branch, # noqa: F401
+ delete_collection, # noqa: F401
+ delete_collection_item, # noqa: F401
+ delete_file, # noqa: F401
+ delete_folder, # noqa: F401
+ delete_inference_endpoint, # noqa: F401
+ delete_repo, # noqa: F401
+ delete_space_secret, # noqa: F401
+ delete_space_storage, # noqa: F401
+ delete_space_variable, # noqa: F401
+ delete_tag, # noqa: F401
+ delete_webhook, # noqa: F401
+ disable_webhook, # noqa: F401
+ duplicate_space, # noqa: F401
+ edit_discussion_comment, # noqa: F401
+ enable_webhook, # noqa: F401
+ file_exists, # noqa: F401
+ get_collection, # noqa: F401
+ get_dataset_tags, # noqa: F401
+ get_discussion_details, # noqa: F401
+ get_full_repo_name, # noqa: F401
+ get_inference_endpoint, # noqa: F401
+ get_model_tags, # noqa: F401
+ get_paths_info, # noqa: F401
+ get_repo_discussions, # noqa: F401
+ get_safetensors_metadata, # noqa: F401
+ get_space_runtime, # noqa: F401
+ get_space_variables, # noqa: F401
+ get_token_permission, # noqa: F401
+ get_user_overview, # noqa: F401
+ get_webhook, # noqa: F401
+ grant_access, # noqa: F401
+ list_accepted_access_requests, # noqa: F401
+ list_collections, # noqa: F401
+ list_datasets, # noqa: F401
+ list_inference_endpoints, # noqa: F401
+ list_liked_repos, # noqa: F401
+ list_models, # noqa: F401
+ list_organization_members, # noqa: F401
+ list_papers, # noqa: F401
+ list_pending_access_requests, # noqa: F401
+ list_rejected_access_requests, # noqa: F401
+ list_repo_commits, # noqa: F401
+ list_repo_files, # noqa: F401
+ list_repo_likers, # noqa: F401
+ list_repo_refs, # noqa: F401
+ list_repo_tree, # noqa: F401
+ list_spaces, # noqa: F401
+ list_user_followers, # noqa: F401
+ list_user_following, # noqa: F401
+ list_webhooks, # noqa: F401
+ merge_pull_request, # noqa: F401
+ model_info, # noqa: F401
+ move_repo, # noqa: F401
+ paper_info, # noqa: F401
+ parse_safetensors_file_metadata, # noqa: F401
+ pause_inference_endpoint, # noqa: F401
+ pause_space, # noqa: F401
+ preupload_lfs_files, # noqa: F401
+ reject_access_request, # noqa: F401
+ rename_discussion, # noqa: F401
+ repo_exists, # noqa: F401
+ repo_info, # noqa: F401
+ repo_type_and_id_from_hf_id, # noqa: F401
+ request_space_hardware, # noqa: F401
+ request_space_storage, # noqa: F401
+ restart_space, # noqa: F401
+ resume_inference_endpoint, # noqa: F401
+ revision_exists, # noqa: F401
+ run_as_future, # noqa: F401
+ scale_to_zero_inference_endpoint, # noqa: F401
+ set_space_sleep_time, # noqa: F401
+ space_info, # noqa: F401
+ super_squash_history, # noqa: F401
+ unlike, # noqa: F401
+ update_collection_item, # noqa: F401
+ update_collection_metadata, # noqa: F401
+ update_inference_endpoint, # noqa: F401
+ update_repo_settings, # noqa: F401
+ update_repo_visibility, # noqa: F401
+ update_webhook, # noqa: F401
+ upload_file, # noqa: F401
+ upload_folder, # noqa: F401
+ upload_large_folder, # noqa: F401
+ whoami, # noqa: F401
+ )
+ from .hf_file_system import (
+ HfFileSystem, # noqa: F401
+ HfFileSystemFile, # noqa: F401
+ HfFileSystemResolvedPath, # noqa: F401
+ HfFileSystemStreamFile, # noqa: F401
+ )
+ from .hub_mixin import (
+ ModelHubMixin, # noqa: F401
+ PyTorchModelHubMixin, # noqa: F401
+ )
+ from .inference._client import (
+ InferenceClient, # noqa: F401
+ InferenceTimeoutError, # noqa: F401
+ )
+ from .inference._generated._async_client import AsyncInferenceClient # noqa: F401
+ from .inference._generated.types import (
+ AudioClassificationInput, # noqa: F401
+ AudioClassificationOutputElement, # noqa: F401
+ AudioClassificationOutputTransform, # noqa: F401
+ AudioClassificationParameters, # noqa: F401
+ AudioToAudioInput, # noqa: F401
+ AudioToAudioOutputElement, # noqa: F401
+ AutomaticSpeechRecognitionEarlyStoppingEnum, # noqa: F401
+ AutomaticSpeechRecognitionGenerationParameters, # noqa: F401
+ AutomaticSpeechRecognitionInput, # noqa: F401
+ AutomaticSpeechRecognitionOutput, # noqa: F401
+ AutomaticSpeechRecognitionOutputChunk, # noqa: F401
+ AutomaticSpeechRecognitionParameters, # noqa: F401
+ ChatCompletionInput, # noqa: F401
+ ChatCompletionInputFunctionDefinition, # noqa: F401
+ ChatCompletionInputFunctionName, # noqa: F401
+ ChatCompletionInputGrammarType, # noqa: F401
+ ChatCompletionInputGrammarTypeType, # noqa: F401
+ ChatCompletionInputMessage, # noqa: F401
+ ChatCompletionInputMessageChunk, # noqa: F401
+ ChatCompletionInputMessageChunkType, # noqa: F401
+ ChatCompletionInputStreamOptions, # noqa: F401
+ ChatCompletionInputTool, # noqa: F401
+ ChatCompletionInputToolChoiceClass, # noqa: F401
+ ChatCompletionInputToolChoiceEnum, # noqa: F401
+ ChatCompletionInputURL, # noqa: F401
+ ChatCompletionOutput, # noqa: F401
+ ChatCompletionOutputComplete, # noqa: F401
+ ChatCompletionOutputFunctionDefinition, # noqa: F401
+ ChatCompletionOutputLogprob, # noqa: F401
+ ChatCompletionOutputLogprobs, # noqa: F401
+ ChatCompletionOutputMessage, # noqa: F401
+ ChatCompletionOutputToolCall, # noqa: F401
+ ChatCompletionOutputTopLogprob, # noqa: F401
+ ChatCompletionOutputUsage, # noqa: F401
+ ChatCompletionStreamOutput, # noqa: F401
+ ChatCompletionStreamOutputChoice, # noqa: F401
+ ChatCompletionStreamOutputDelta, # noqa: F401
+ ChatCompletionStreamOutputDeltaToolCall, # noqa: F401
+ ChatCompletionStreamOutputFunction, # noqa: F401
+ ChatCompletionStreamOutputLogprob, # noqa: F401
+ ChatCompletionStreamOutputLogprobs, # noqa: F401
+ ChatCompletionStreamOutputTopLogprob, # noqa: F401
+ ChatCompletionStreamOutputUsage, # noqa: F401
+ DepthEstimationInput, # noqa: F401
+ DepthEstimationOutput, # noqa: F401
+ DocumentQuestionAnsweringInput, # noqa: F401
+ DocumentQuestionAnsweringInputData, # noqa: F401
+ DocumentQuestionAnsweringOutputElement, # noqa: F401
+ DocumentQuestionAnsweringParameters, # noqa: F401
+ FeatureExtractionInput, # noqa: F401
+ FeatureExtractionInputTruncationDirection, # noqa: F401
+ FillMaskInput, # noqa: F401
+ FillMaskOutputElement, # noqa: F401
+ FillMaskParameters, # noqa: F401
+ ImageClassificationInput, # noqa: F401
+ ImageClassificationOutputElement, # noqa: F401
+ ImageClassificationOutputTransform, # noqa: F401
+ ImageClassificationParameters, # noqa: F401
+ ImageSegmentationInput, # noqa: F401
+ ImageSegmentationOutputElement, # noqa: F401
+ ImageSegmentationParameters, # noqa: F401
+ ImageSegmentationSubtask, # noqa: F401
+ ImageToImageInput, # noqa: F401
+ ImageToImageOutput, # noqa: F401
+ ImageToImageParameters, # noqa: F401
+ ImageToImageTargetSize, # noqa: F401
+ ImageToTextEarlyStoppingEnum, # noqa: F401
+ ImageToTextGenerationParameters, # noqa: F401
+ ImageToTextInput, # noqa: F401
+ ImageToTextOutput, # noqa: F401
+ ImageToTextParameters, # noqa: F401
+ ObjectDetectionBoundingBox, # noqa: F401
+ ObjectDetectionInput, # noqa: F401
+ ObjectDetectionOutputElement, # noqa: F401
+ ObjectDetectionParameters, # noqa: F401
+ Padding, # noqa: F401
+ QuestionAnsweringInput, # noqa: F401
+ QuestionAnsweringInputData, # noqa: F401
+ QuestionAnsweringOutputElement, # noqa: F401
+ QuestionAnsweringParameters, # noqa: F401
+ SentenceSimilarityInput, # noqa: F401
+ SentenceSimilarityInputData, # noqa: F401
+ SummarizationInput, # noqa: F401
+ SummarizationOutput, # noqa: F401
+ SummarizationParameters, # noqa: F401
+ SummarizationTruncationStrategy, # noqa: F401
+ TableQuestionAnsweringInput, # noqa: F401
+ TableQuestionAnsweringInputData, # noqa: F401
+ TableQuestionAnsweringOutputElement, # noqa: F401
+ TableQuestionAnsweringParameters, # noqa: F401
+ Text2TextGenerationInput, # noqa: F401
+ Text2TextGenerationOutput, # noqa: F401
+ Text2TextGenerationParameters, # noqa: F401
+ Text2TextGenerationTruncationStrategy, # noqa: F401
+ TextClassificationInput, # noqa: F401
+ TextClassificationOutputElement, # noqa: F401
+ TextClassificationOutputTransform, # noqa: F401
+ TextClassificationParameters, # noqa: F401
+ TextGenerationInput, # noqa: F401
+ TextGenerationInputGenerateParameters, # noqa: F401
+ TextGenerationInputGrammarType, # noqa: F401
+ TextGenerationOutput, # noqa: F401
+ TextGenerationOutputBestOfSequence, # noqa: F401
+ TextGenerationOutputDetails, # noqa: F401
+ TextGenerationOutputFinishReason, # noqa: F401
+ TextGenerationOutputPrefillToken, # noqa: F401
+ TextGenerationOutputToken, # noqa: F401
+ TextGenerationStreamOutput, # noqa: F401
+ TextGenerationStreamOutputStreamDetails, # noqa: F401
+ TextGenerationStreamOutputToken, # noqa: F401
+ TextToAudioEarlyStoppingEnum, # noqa: F401
+ TextToAudioGenerationParameters, # noqa: F401
+ TextToAudioInput, # noqa: F401
+ TextToAudioOutput, # noqa: F401
+ TextToAudioParameters, # noqa: F401
+ TextToImageInput, # noqa: F401
+ TextToImageOutput, # noqa: F401
+ TextToImageParameters, # noqa: F401
+ TextToSpeechEarlyStoppingEnum, # noqa: F401
+ TextToSpeechGenerationParameters, # noqa: F401
+ TextToSpeechInput, # noqa: F401
+ TextToSpeechOutput, # noqa: F401
+ TextToSpeechParameters, # noqa: F401
+ TextToVideoInput, # noqa: F401
+ TextToVideoOutput, # noqa: F401
+ TextToVideoParameters, # noqa: F401
+ TokenClassificationAggregationStrategy, # noqa: F401
+ TokenClassificationInput, # noqa: F401
+ TokenClassificationOutputElement, # noqa: F401
+ TokenClassificationParameters, # noqa: F401
+ TranslationInput, # noqa: F401
+ TranslationOutput, # noqa: F401
+ TranslationParameters, # noqa: F401
+ TranslationTruncationStrategy, # noqa: F401
+ TypeEnum, # noqa: F401
+ VideoClassificationInput, # noqa: F401
+ VideoClassificationOutputElement, # noqa: F401
+ VideoClassificationOutputTransform, # noqa: F401
+ VideoClassificationParameters, # noqa: F401
+ VisualQuestionAnsweringInput, # noqa: F401
+ VisualQuestionAnsweringInputData, # noqa: F401
+ VisualQuestionAnsweringOutputElement, # noqa: F401
+ VisualQuestionAnsweringParameters, # noqa: F401
+ ZeroShotClassificationInput, # noqa: F401
+ ZeroShotClassificationOutputElement, # noqa: F401
+ ZeroShotClassificationParameters, # noqa: F401
+ ZeroShotImageClassificationInput, # noqa: F401
+ ZeroShotImageClassificationOutputElement, # noqa: F401
+ ZeroShotImageClassificationParameters, # noqa: F401
+ ZeroShotObjectDetectionBoundingBox, # noqa: F401
+ ZeroShotObjectDetectionInput, # noqa: F401
+ ZeroShotObjectDetectionOutputElement, # noqa: F401
+ ZeroShotObjectDetectionParameters, # noqa: F401
+ )
+ from .inference_api import InferenceApi # noqa: F401
+ from .keras_mixin import (
+ KerasModelHubMixin, # noqa: F401
+ from_pretrained_keras, # noqa: F401
+ push_to_hub_keras, # noqa: F401
+ save_pretrained_keras, # noqa: F401
+ )
+ from .repocard import (
+ DatasetCard, # noqa: F401
+ ModelCard, # noqa: F401
+ RepoCard, # noqa: F401
+ SpaceCard, # noqa: F401
+ metadata_eval_result, # noqa: F401
+ metadata_load, # noqa: F401
+ metadata_save, # noqa: F401
+ metadata_update, # noqa: F401
+ )
+ from .repocard_data import (
+ CardData, # noqa: F401
+ DatasetCardData, # noqa: F401
+ EvalResult, # noqa: F401
+ ModelCardData, # noqa: F401
+ SpaceCardData, # noqa: F401
+ )
+ from .repository import Repository # noqa: F401
+ from .serialization import (
+ StateDictSplit, # noqa: F401
+ get_tf_storage_size, # noqa: F401
+ get_torch_storage_id, # noqa: F401
+ get_torch_storage_size, # noqa: F401
+ load_state_dict_from_file, # noqa: F401
+ load_torch_model, # noqa: F401
+ save_torch_model, # noqa: F401
+ save_torch_state_dict, # noqa: F401
+ split_state_dict_into_shards_factory, # noqa: F401
+ split_tf_state_dict_into_shards, # noqa: F401
+ split_torch_state_dict_into_shards, # noqa: F401
+ )
+ from .serialization._dduf import (
+ DDUFEntry, # noqa: F401
+ export_entries_as_dduf, # noqa: F401
+ export_folder_as_dduf, # noqa: F401
+ read_dduf_file, # noqa: F401
+ )
+ from .utils import (
+ CachedFileInfo, # noqa: F401
+ CachedRepoInfo, # noqa: F401
+ CachedRevisionInfo, # noqa: F401
+ CacheNotFound, # noqa: F401
+ CorruptedCacheException, # noqa: F401
+ DeleteCacheStrategy, # noqa: F401
+ HFCacheInfo, # noqa: F401
+ HfFolder, # noqa: F401
+ cached_assets_path, # noqa: F401
+ configure_http_backend, # noqa: F401
+ dump_environment_info, # noqa: F401
+ get_session, # noqa: F401
+ get_token, # noqa: F401
+ logging, # noqa: F401
+ scan_cache_dir, # noqa: F401
+ )
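
The `_SUBMOD_ATTRS` table and `_attach` helper above implement lazy loading through PEP 562's module-level `__getattr__`: `import huggingface_hub` stays cheap because submodules such as `hf_api` or `inference` are only imported the first time one of their attributes is accessed, and setting the `EAGER_IMPORT` environment variable forces every attribute to be resolved at import time (handy for surfacing broken imports early). Below is a minimal, self-contained sketch of the same pattern; the package and attribute names (`lazy_pkg`, `helpers`, `greet`) are illustrative placeholders, not part of `huggingface_hub`.

```py
# lazy_pkg/__init__.py -- minimal sketch of the lazy-attribute pattern (PEP 562)
import importlib
import sys

# attribute name -> submodule that defines it (placeholder names)
_SUBMOD_ATTRS = {"helpers": ["greet", "farewell"]}
_ATTR_TO_MODULE = {attr: mod for mod, attrs in _SUBMOD_ATTRS.items() for attr in attrs}
__all__ = sorted(_ATTR_TO_MODULE)


def __getattr__(name):
    # Called by Python only when `name` is not already defined on the package.
    if name in _ATTR_TO_MODULE:
        submodule = importlib.import_module(f"{__name__}.{_ATTR_TO_MODULE[name]}")
        attr = getattr(submodule, name)
        # Cache the attribute on the package so __getattr__ is not triggered again.
        sys.modules[__name__].__dict__[name] = attr
        return attr
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


def __dir__():
    return __all__
```
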
diff --git a/env/Lib/site-packages/huggingface_hub/_commit_api.py b/env/Lib/site-packages/huggingface_hub/_commit_api.py
new file mode 100644
index 0000000000000000000000000000000000000000..783a3d2e3fdf2301000a6088e02ba74742a87454
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub/_commit_api.py
@@ -0,0 +1,758 @@
+"""
+Type definitions and utilities for the `create_commit` API
+"""
+
+import base64
+import io
+import os
+import warnings
+from collections import defaultdict
+from contextlib import contextmanager
+from dataclasses import dataclass, field
+from itertools import groupby
+from pathlib import Path, PurePosixPath
+from typing import TYPE_CHECKING, Any, BinaryIO, Dict, Iterable, Iterator, List, Literal, Optional, Tuple, Union
+
+from tqdm.contrib.concurrent import thread_map
+
+from . import constants
+from .errors import EntryNotFoundError
+from .file_download import hf_hub_url
+from .lfs import UploadInfo, lfs_upload, post_lfs_batch_info
+from .utils import (
+ FORBIDDEN_FOLDERS,
+ chunk_iterable,
+ get_session,
+ hf_raise_for_status,
+ logging,
+ sha,
+ tqdm_stream_file,
+ validate_hf_hub_args,
+)
+from .utils import tqdm as hf_tqdm
+
+
+if TYPE_CHECKING:
+ from .hf_api import RepoFile
+
+
+logger = logging.get_logger(__name__)
+
+
+UploadMode = Literal["lfs", "regular"]
+
+# Max is 1,000 per request on the Hub for HfApi.get_paths_info
+# Otherwise we get:
+# HfHubHTTPError: 413 Client Error: Payload Too Large for url: https://huggingface.co/api/datasets/xxx (Request ID: xxx)\n\ntoo many parameters
+# See https://github.com/huggingface/huggingface_hub/issues/1503
+FETCH_LFS_BATCH_SIZE = 500
+
+
+@dataclass
+class CommitOperationDelete:
+ """
+ Data structure holding necessary info to delete a file or a folder from a repository
+ on the Hub.
+
+ Args:
+ path_in_repo (`str`):
+ Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
+ for a file or `"checkpoints/1fec34a/"` for a folder.
+ is_folder (`bool` or `Literal["auto"]`, *optional*):
+ Whether the delete operation applies to a folder or not. If `"auto"`, the path
+ type (file or folder) is guessed automatically by checking whether the path ends
+ with a `"/"` (folder) or not (file). To explicitly set the path type, you can set
+ `is_folder=True` or `is_folder=False`.
+ """
+
+ path_in_repo: str
+ is_folder: Union[bool, Literal["auto"]] = "auto"
+
+ def __post_init__(self):
+ self.path_in_repo = _validate_path_in_repo(self.path_in_repo)
+
+ if self.is_folder == "auto":
+ self.is_folder = self.path_in_repo.endswith("/")
+ if not isinstance(self.is_folder, bool):
+ raise ValueError(
+ f"Wrong value for `is_folder`. Must be one of [`True`, `False`, `'auto'`]. Got '{self.is_folder}'."
+ )
+
+
+@dataclass
+class CommitOperationCopy:
+ """
+ Data structure holding necessary info to copy a file in a repository on the Hub.
+
+ Limitations:
+ - Only LFS files can be copied. To copy a regular file, you need to download it locally and re-upload it
+ - Cross-repository copies are not supported.
+
+ Note: you can combine a [`CommitOperationCopy`] and a [`CommitOperationDelete`] to rename an LFS file on the Hub.
+
+ Args:
+ src_path_in_repo (`str`):
+ Relative filepath in the repo of the file to be copied, e.g. `"checkpoints/1fec34a/weights.bin"`.
+ path_in_repo (`str`):
+ Relative filepath in the repo where to copy the file, e.g. `"checkpoints/1fec34a/weights_copy.bin"`.
+ src_revision (`str`, *optional*):
+ The git revision of the file to be copied. Can be any valid git revision.
+ Defaults to the target commit revision.
+ """
+
+ src_path_in_repo: str
+ path_in_repo: str
+ src_revision: Optional[str] = None
+ # set to the OID of the file to be copied if it has already been uploaded
+ # useful to determine if a commit will be empty or not.
+ _src_oid: Optional[str] = None
+ # set to the OID of the file to copy to if it has already been uploaded
+ # useful to determine if a commit will be empty or not.
+ _dest_oid: Optional[str] = None
+
+ def __post_init__(self):
+ self.src_path_in_repo = _validate_path_in_repo(self.src_path_in_repo)
+ self.path_in_repo = _validate_path_in_repo(self.path_in_repo)
+
+
+@dataclass
+class CommitOperationAdd:
+ """
+ Data structure holding necessary info to upload a file to a repository on the Hub.
+
+ Args:
+ path_in_repo (`str`):
+ Relative filepath in the repo, for example: `"checkpoints/1fec34a/weights.bin"`
+ path_or_fileobj (`str`, `Path`, `bytes`, or `BinaryIO`):
+ Either:
+ - a path to a local file (as `str` or `pathlib.Path`) to upload
+ - a buffer of bytes (`bytes`) holding the content of the file to upload
+ - a "file object" (subclass of `io.BufferedIOBase`), typically obtained
+ with `open(path, "rb")`. It must support `seek()` and `tell()` methods.
+
+ Raises:
+ [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
+ If `path_or_fileobj` is not one of `str`, `Path`, `bytes` or `io.BufferedIOBase`.
+ [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
+ If `path_or_fileobj` is a `str` or `Path` but not a path to an existing file.
+ [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
+ If `path_or_fileobj` is a `io.BufferedIOBase` but it doesn't support both
+ `seek()` and `tell()`.
+ """
+
+ path_in_repo: str
+ path_or_fileobj: Union[str, Path, bytes, BinaryIO]
+ upload_info: UploadInfo = field(init=False, repr=False)
+
+ # Internal attributes
+
+ # set to "lfs" or "regular" once known
+ _upload_mode: Optional[UploadMode] = field(init=False, repr=False, default=None)
+
+ # set to True if .gitignore rules prevent the file from being uploaded as LFS
+ # (server-side check)
+ _should_ignore: Optional[bool] = field(init=False, repr=False, default=None)
+
+ # set to the remote OID of the file if it has already been uploaded
+ # useful to determine if a commit will be empty or not
+ _remote_oid: Optional[str] = field(init=False, repr=False, default=None)
+
+ # set to True once the file has been uploaded as LFS
+ _is_uploaded: bool = field(init=False, repr=False, default=False)
+
+ # set to True once the file has been committed
+ _is_committed: bool = field(init=False, repr=False, default=False)
+
+ def __post_init__(self) -> None:
+ """Validates `path_or_fileobj` and compute `upload_info`."""
+ self.path_in_repo = _validate_path_in_repo(self.path_in_repo)
+
+ # Validate `path_or_fileobj` value
+ if isinstance(self.path_or_fileobj, Path):
+ self.path_or_fileobj = str(self.path_or_fileobj)
+ if isinstance(self.path_or_fileobj, str):
+ path_or_fileobj = os.path.normpath(os.path.expanduser(self.path_or_fileobj))
+ if not os.path.isfile(path_or_fileobj):
+ raise ValueError(f"Provided path: '{path_or_fileobj}' is not a file on the local file system")
+ elif not isinstance(self.path_or_fileobj, (io.BufferedIOBase, bytes)):
+ # ^^ Inspired from: https://stackoverflow.com/questions/44584829/how-to-determine-if-file-is-opened-in-binary-or-text-mode
+ raise ValueError(
+ "path_or_fileobj must be either an instance of str, bytes or"
+ " io.BufferedIOBase. If you passed a file-like object, make sure it is"
+ " in binary mode."
+ )
+ if isinstance(self.path_or_fileobj, io.BufferedIOBase):
+ try:
+ self.path_or_fileobj.tell()
+ self.path_or_fileobj.seek(0, os.SEEK_CUR)
+ except (OSError, AttributeError) as exc:
+ raise ValueError(
+ "path_or_fileobj is a file-like object but does not implement seek() and tell()"
+ ) from exc
+
+ # Compute "upload_info" attribute
+ if isinstance(self.path_or_fileobj, str):
+ self.upload_info = UploadInfo.from_path(self.path_or_fileobj)
+ elif isinstance(self.path_or_fileobj, bytes):
+ self.upload_info = UploadInfo.from_bytes(self.path_or_fileobj)
+ else:
+ self.upload_info = UploadInfo.from_fileobj(self.path_or_fileobj)
+
+ @contextmanager
+ def as_file(self, with_tqdm: bool = False) -> Iterator[BinaryIO]:
+ """
+ A context manager that yields a file-like object for reading the underlying
+ data behind `path_or_fileobj`.
+
+ Args:
+ with_tqdm (`bool`, *optional*, defaults to `False`):
+ If True, iterating over the file object will display a progress bar. Only
+ works when `path_or_fileobj` is a path to a local file; pure bytes and buffers
+ are not supported.
+
+ Example:
+
+ ```python
+ >>> operation = CommitOperationAdd(
+ ... path_in_repo="remote/dir/weights.h5",
+ ... path_or_fileobj="./local/weights.h5",
+ ... )
+ CommitOperationAdd(path_in_repo='remote/dir/weights.h5', path_or_fileobj='./local/weights.h5')
+
+ >>> with operation.as_file() as file:
+ ... content = file.read()
+
+ >>> with operation.as_file(with_tqdm=True) as file:
+ ... while True:
+ ... data = file.read(1024)
+ ... if not data:
+ ... break
+ config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
+
+ >>> with operation.as_file(with_tqdm=True) as file:
+ ... requests.put(..., data=file)
+ config.json: 100%|█████████████████████████| 8.19k/8.19k [00:02<00:00, 3.72kB/s]
+ ```
+ """
+ if isinstance(self.path_or_fileobj, str) or isinstance(self.path_or_fileobj, Path):
+ if with_tqdm:
+ with tqdm_stream_file(self.path_or_fileobj) as file:
+ yield file
+ else:
+ with open(self.path_or_fileobj, "rb") as file:
+ yield file
+ elif isinstance(self.path_or_fileobj, bytes):
+ yield io.BytesIO(self.path_or_fileobj)
+ elif isinstance(self.path_or_fileobj, io.BufferedIOBase):
+ prev_pos = self.path_or_fileobj.tell()
+ yield self.path_or_fileobj
+ self.path_or_fileobj.seek(prev_pos, io.SEEK_SET)
+
+ def b64content(self) -> bytes:
+ """
+ The base64-encoded content of `path_or_fileobj`
+
+ Returns: `bytes`
+ """
+ with self.as_file() as file:
+ return base64.b64encode(file.read())
+
+ @property
+ def _local_oid(self) -> Optional[str]:
+ """Return the OID of the local file.
+
+ This OID is then compared to `self._remote_oid` to check if the file has changed compared to the remote one.
+ If the file did not change, we won't upload it again to prevent empty commits.
+
+ For LFS files, the OID corresponds to the SHA256 of the file content (used as the LFS reference).
+ For regular files, the OID corresponds to the SHA1 of the file content.
+ Note: this is slightly different from the git OID computation, since the OID of an LFS file is usually the git-SHA1 of the
+ pointer file content (not the actual file content). However, using the SHA256 is enough to detect changes
+ and is more convenient client-side.
+ """
+ if self._upload_mode is None:
+ return None
+ elif self._upload_mode == "lfs":
+ return self.upload_info.sha256.hex()
+ else:
+ # Regular file => compute sha1
+ # => no need to read by chunk since the file is guaranteed to be <=5MB.
+ with self.as_file() as file:
+ return sha.git_hash(file.read())
+
+
+def _validate_path_in_repo(path_in_repo: str) -> str:
+ # Validate `path_in_repo` value to prevent a server-side issue
+ if path_in_repo.startswith("/"):
+ path_in_repo = path_in_repo[1:]
+ if path_in_repo == "." or path_in_repo == ".." or path_in_repo.startswith("../"):
+ raise ValueError(f"Invalid `path_in_repo` in CommitOperation: '{path_in_repo}'")
+ if path_in_repo.startswith("./"):
+ path_in_repo = path_in_repo[2:]
+ for forbidden in FORBIDDEN_FOLDERS:
+ if any(part == forbidden for part in path_in_repo.split("/")):
+ raise ValueError(
+ f"Invalid `path_in_repo` in CommitOperation: cannot update files under a '{forbidden}/' folder (path:"
+ f" '{path_in_repo}')."
+ )
+ return path_in_repo
+
+
+CommitOperation = Union[CommitOperationAdd, CommitOperationCopy, CommitOperationDelete]
+
+
+def _warn_on_overwriting_operations(operations: List[CommitOperation]) -> None:
+ """
+ Warn user when a list of operations is expected to overwrite itself in a single
+ commit.
+
+ Rules:
+ - If a filepath is updated by multiple `CommitOperationAdd` operations, a warning
+ message is triggered.
+ - If a filepath is updated at least once by a `CommitOperationAdd` and then deleted
+ by a `CommitOperationDelete`, a warning is triggered.
+ - If a `CommitOperationDelete` deletes a filepath that is then updated by a
+ `CommitOperationAdd`, no warning is triggered. This is usually useless (no need to
+ delete before upload) but can happen if a user deletes an entire folder and then
+ adds new files to it.
+ """
+ nb_additions_per_path: Dict[str, int] = defaultdict(int)
+ for operation in operations:
+ path_in_repo = operation.path_in_repo
+ if isinstance(operation, CommitOperationAdd):
+ if nb_additions_per_path[path_in_repo] > 0:
+ warnings.warn(
+ "About to update multiple times the same file in the same commit:"
+ f" '{path_in_repo}'. This can cause undesired inconsistencies in"
+ " your repo."
+ )
+ nb_additions_per_path[path_in_repo] += 1
+ for parent in PurePosixPath(path_in_repo).parents:
+ # Also keep track of number of updated files per folder
+ # => warns if deleting a folder overwrites some contained files
+ nb_additions_per_path[str(parent)] += 1
+ if isinstance(operation, CommitOperationDelete):
+ if nb_additions_per_path[str(PurePosixPath(path_in_repo))] > 0:
+ if operation.is_folder:
+ warnings.warn(
+ "About to delete a folder containing files that have just been"
+ f" updated within the same commit: '{path_in_repo}'. This can"
+ " cause undesired inconsistencies in your repo."
+ )
+ else:
+ warnings.warn(
+ "About to delete a file that have just been updated within the"
+ f" same commit: '{path_in_repo}'. This can cause undesired"
+ " inconsistencies in your repo."
+ )
+
+
+@validate_hf_hub_args
+def _upload_lfs_files(
+ *,
+ additions: List[CommitOperationAdd],
+ repo_type: str,
+ repo_id: str,
+ headers: Dict[str, str],
+ endpoint: Optional[str] = None,
+ num_threads: int = 5,
+ revision: Optional[str] = None,
+):
+ """
+ Uploads the content of `additions` to the Hub using the large file storage protocol.
+
+ Relevant external documentation:
+ - LFS Batch API: https://github.com/git-lfs/git-lfs/blob/main/docs/api/batch.md
+
+ Args:
+ additions (`List` of `CommitOperationAdd`):
+ The files to be uploaded
+ repo_type (`str`):
+ Type of the repo to upload to: `"model"`, `"dataset"` or `"space"`.
+ repo_id (`str`):
+ A namespace (user or an organization) and a repo name separated
+ by a `/`.
+ headers (`Dict[str, str]`):
+ Headers to use for the request, including authorization headers and user agent.
+ num_threads (`int`, *optional*):
+ The number of concurrent threads to use when uploading. Defaults to 5.
+ revision (`str`, *optional*):
+ The git revision to upload to.
+
+ Raises:
+ [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError)
+ If an upload failed for any reason
+ [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
+ If the server returns malformed responses
+ [`HTTPError`](https://requests.readthedocs.io/en/latest/api/#requests.HTTPError)
+ If the LFS batch endpoint returned an HTTP error.
+ """
+ # Step 1: retrieve upload instructions from the LFS batch endpoint.
+ # Upload instructions are retrieved in chunks of 256 files to avoid reaching
+ # the payload limit.
+ batch_actions: List[Dict] = []
+ for chunk in chunk_iterable(additions, chunk_size=256):
+ batch_actions_chunk, batch_errors_chunk = post_lfs_batch_info(
+ upload_infos=[op.upload_info for op in chunk],
+ repo_id=repo_id,
+ repo_type=repo_type,
+ revision=revision,
+ endpoint=endpoint,
+ headers=headers,
+ token=None, # already passed in 'headers'
+ )
+
+ # If at least 1 error, we do not retrieve information for other chunks
+ if batch_errors_chunk:
+ message = "\n".join(
+ [
+ f"Encountered error for file with OID {err.get('oid')}: `{err.get('error', {}).get('message')}"
+ for err in batch_errors_chunk
+ ]
+ )
+ raise ValueError(f"LFS batch endpoint returned errors:\n{message}")
+
+ batch_actions += batch_actions_chunk
+ oid2addop = {add_op.upload_info.sha256.hex(): add_op for add_op in additions}
+
+ # Step 2: ignore files that have already been uploaded
+ filtered_actions = []
+ for action in batch_actions:
+ if action.get("actions") is None:
+ logger.debug(
+ f"Content of file {oid2addop[action['oid']].path_in_repo} is already"
+ " present upstream - skipping upload."
+ )
+ else:
+ filtered_actions.append(action)
+
+ if len(filtered_actions) == 0:
+ logger.debug("No LFS files to upload.")
+ return
+
+ # Step 3: upload files concurrently according to these instructions
+ def _wrapped_lfs_upload(batch_action) -> None:
+ try:
+ operation = oid2addop[batch_action["oid"]]
+ lfs_upload(operation=operation, lfs_batch_action=batch_action, headers=headers, endpoint=endpoint)
+ except Exception as exc:
+ raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc
+
+ if constants.HF_HUB_ENABLE_HF_TRANSFER:
+ logger.debug(f"Uploading {len(filtered_actions)} LFS files to the Hub using `hf_transfer`.")
+ for action in hf_tqdm(filtered_actions, name="huggingface_hub.lfs_upload"):
+ _wrapped_lfs_upload(action)
+ elif len(filtered_actions) == 1:
+ logger.debug("Uploading 1 LFS file to the Hub")
+ _wrapped_lfs_upload(filtered_actions[0])
+ else:
+ logger.debug(
+ f"Uploading {len(filtered_actions)} LFS files to the Hub using up to {num_threads} threads concurrently"
+ )
+ thread_map(
+ _wrapped_lfs_upload,
+ filtered_actions,
+ desc=f"Upload {len(filtered_actions)} LFS files",
+ max_workers=num_threads,
+ tqdm_class=hf_tqdm,
+ )
+
+
+def _validate_preupload_info(preupload_info: dict):
+ files = preupload_info.get("files")
+ if not isinstance(files, list):
+ raise ValueError("preupload_info is improperly formatted")
+ for file_info in files:
+ if not (
+ isinstance(file_info, dict)
+ and isinstance(file_info.get("path"), str)
+ and isinstance(file_info.get("uploadMode"), str)
+ and (file_info["uploadMode"] in ("lfs", "regular"))
+ ):
+ raise ValueError("preupload_info is improperly formatted:")
+ return preupload_info
+
+
+@validate_hf_hub_args
+def _fetch_upload_modes(
+ additions: Iterable[CommitOperationAdd],
+ repo_type: str,
+ repo_id: str,
+ headers: Dict[str, str],
+ revision: str,
+ endpoint: Optional[str] = None,
+ create_pr: bool = False,
+ gitignore_content: Optional[str] = None,
+) -> None:
+ """
+ Requests the Hub "preupload" endpoint to determine whether each input file should be uploaded as a regular git blob
+ or as git LFS blob. Input `additions` are mutated in-place with the upload mode.
+
+ Args:
+ additions (`Iterable` of :class:`CommitOperationAdd`):
+ Iterable of :class:`CommitOperationAdd` describing the files to
+ upload to the Hub.
+ repo_type (`str`):
+ Type of the repo to upload to: `"model"`, `"dataset"` or `"space"`.
+ repo_id (`str`):
+ A namespace (user or an organization) and a repo name separated
+ by a `/`.
+ headers (`Dict[str, str]`):
+ Headers to use for the request, including authorization headers and user agent.
+ revision (`str`):
+ The git revision to upload the files to. Can be any valid git revision.
+ gitignore_content (`str`, *optional*):
+ The content of the `.gitignore` file to know which files should be ignored. The order of priority
+ is to first check if `gitignore_content` is passed, then check if the `.gitignore` file is present
+ in the list of files to commit and finally default to the `.gitignore` file already hosted on the Hub
+ (if any).
+ Raises:
+ [`~utils.HfHubHTTPError`]
+ If the Hub API returned an error.
+ [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
+ If the Hub API response is improperly formatted.
+ """
+ endpoint = endpoint if endpoint is not None else constants.ENDPOINT
+
+ # Fetch upload mode (LFS or regular) chunk by chunk.
+ upload_modes: Dict[str, UploadMode] = {}
+ should_ignore_info: Dict[str, bool] = {}
+ oid_info: Dict[str, Optional[str]] = {}
+
+ for chunk in chunk_iterable(additions, 256):
+ payload: Dict = {
+ "files": [
+ {
+ "path": op.path_in_repo,
+ "sample": base64.b64encode(op.upload_info.sample).decode("ascii"),
+ "size": op.upload_info.size,
+ }
+ for op in chunk
+ ]
+ }
+ if gitignore_content is not None:
+ payload["gitIgnore"] = gitignore_content
+
+ resp = get_session().post(
+ f"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}",
+ json=payload,
+ headers=headers,
+ params={"create_pr": "1"} if create_pr else None,
+ )
+ hf_raise_for_status(resp)
+ preupload_info = _validate_preupload_info(resp.json())
+ upload_modes.update(**{file["path"]: file["uploadMode"] for file in preupload_info["files"]})
+ should_ignore_info.update(**{file["path"]: file["shouldIgnore"] for file in preupload_info["files"]})
+ oid_info.update(**{file["path"]: file.get("oid") for file in preupload_info["files"]})
+
+ # Set upload mode for each addition operation
+ for addition in additions:
+ addition._upload_mode = upload_modes[addition.path_in_repo]
+ addition._should_ignore = should_ignore_info[addition.path_in_repo]
+ addition._remote_oid = oid_info[addition.path_in_repo]
+
+ # Empty files cannot be uploaded as LFS (S3 would fail with a 501 Not Implemented)
+ # => empty files are uploaded as "regular" to still allow users to commit them.
+ for addition in additions:
+ if addition.upload_info.size == 0:
+ addition._upload_mode = "regular"
+
+
+@validate_hf_hub_args
+def _fetch_files_to_copy(
+ copies: Iterable[CommitOperationCopy],
+ repo_type: str,
+ repo_id: str,
+ headers: Dict[str, str],
+ revision: str,
+ endpoint: Optional[str] = None,
+) -> Dict[Tuple[str, Optional[str]], Union["RepoFile", bytes]]:
+ """
+ Fetch information about the files to copy.
+
+ For LFS files, we only need their metadata (file size and sha256) while for regular files
+ we need to download the raw content from the Hub.
+
+ Args:
+ copies (`Iterable` of :class:`CommitOperationCopy`):
+ Iterable of :class:`CommitOperationCopy` describing the files to
+ copy on the Hub.
+ repo_type (`str`):
+ Type of the repo to upload to: `"model"`, `"dataset"` or `"space"`.
+ repo_id (`str`):
+ A namespace (user or an organization) and a repo name separated
+ by a `/`.
+ headers (`Dict[str, str]`):
+ Headers to use for the request, including authorization headers and user agent.
+ revision (`str`):
+ The git revision to upload the files to. Can be any valid git revision.
+
+ Returns: `Dict[Tuple[str, Optional[str]], Union[RepoFile, bytes]]`
+ Key is the file path and revision of the file to copy.
+ Value is the raw content as bytes (for regular files) or the file information as a RepoFile (for LFS files).
+
+ Raises:
+ [`~utils.HfHubHTTPError`]
+ If the Hub API returned an error.
+ [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
+ If the Hub API response is improperly formatted.
+ """
+ from .hf_api import HfApi, RepoFolder
+
+ hf_api = HfApi(endpoint=endpoint, headers=headers)
+ files_to_copy: Dict[Tuple[str, Optional[str]], Union["RepoFile", bytes]] = {}
+ # Store (path, revision) -> oid mapping
+ oid_info: Dict[Tuple[str, Optional[str]], Optional[str]] = {}
+ # 1. Fetch OIDs for destination paths in batches.
+ dest_paths = [op.path_in_repo for op in copies]
+ for offset in range(0, len(dest_paths), FETCH_LFS_BATCH_SIZE):
+ dest_repo_files = hf_api.get_paths_info(
+ repo_id=repo_id,
+ paths=dest_paths[offset : offset + FETCH_LFS_BATCH_SIZE],
+ revision=revision,
+ repo_type=repo_type,
+ )
+ for file in dest_repo_files:
+ if not isinstance(file, RepoFolder):
+ oid_info[(file.path, revision)] = file.blob_id
+
+ # 2. Group by source revision and fetch source file info in batches.
+ for src_revision, operations in groupby(copies, key=lambda op: op.src_revision):
+ operations = list(operations) # type: ignore
+ src_paths = [op.src_path_in_repo for op in operations]
+ for offset in range(0, len(src_paths), FETCH_LFS_BATCH_SIZE):
+ src_repo_files = hf_api.get_paths_info(
+ repo_id=repo_id,
+ paths=src_paths[offset : offset + FETCH_LFS_BATCH_SIZE],
+ revision=src_revision or revision,
+ repo_type=repo_type,
+ )
+
+ for src_repo_file in src_repo_files:
+ if isinstance(src_repo_file, RepoFolder):
+ raise NotImplementedError("Copying a folder is not implemented.")
+ oid_info[(src_repo_file.path, src_revision)] = src_repo_file.blob_id
+ # If it's an LFS file, store the RepoFile object. Otherwise, download raw bytes.
+ if src_repo_file.lfs:
+ files_to_copy[(src_repo_file.path, src_revision)] = src_repo_file
+ else:
+ # TODO: (optimization) download regular files to copy concurrently
+ url = hf_hub_url(
+ endpoint=endpoint,
+ repo_type=repo_type,
+ repo_id=repo_id,
+ revision=src_revision or revision,
+ filename=src_repo_file.path,
+ )
+ response = get_session().get(url, headers=headers)
+ hf_raise_for_status(response)
+ files_to_copy[(src_repo_file.path, src_revision)] = response.content
+ # 3. Ensure all operations found a corresponding file in the Hub
+ # and track src/dest OIDs for each operation.
+ for operation in operations:
+ if (operation.src_path_in_repo, src_revision) not in files_to_copy:
+ raise EntryNotFoundError(
+ f"Cannot copy {operation.src_path_in_repo} at revision "
+ f"{src_revision or revision}: file is missing on repo."
+ )
+ operation._src_oid = oid_info.get((operation.src_path_in_repo, operation.src_revision))
+ operation._dest_oid = oid_info.get((operation.path_in_repo, revision))
+ return files_to_copy
+
+
+def _prepare_commit_payload(
+ operations: Iterable[CommitOperation],
+ files_to_copy: Dict[Tuple[str, Optional[str]], Union["RepoFile", bytes]],
+ commit_message: str,
+ commit_description: Optional[str] = None,
+ parent_commit: Optional[str] = None,
+) -> Iterable[Dict[str, Any]]:
+ """
+ Builds the payload to POST to the `/commit` API of the Hub.
+
+ Payload is returned as an iterator so that it can be streamed as ndjson in the
+ POST request.
+
+ For more information, see:
+ - https://github.com/huggingface/huggingface_hub/issues/1085#issuecomment-1265208073
+ - http://ndjson.org/
+ """
+ commit_description = commit_description if commit_description is not None else ""
+
+ # 1. Send a header item with the commit metadata
+ header_value = {"summary": commit_message, "description": commit_description}
+ if parent_commit is not None:
+ header_value["parentCommit"] = parent_commit
+ yield {"key": "header", "value": header_value}
+
+ nb_ignored_files = 0
+
+ # 2. Send operations, one per line
+ for operation in operations:
+ # Skip ignored files
+ if isinstance(operation, CommitOperationAdd) and operation._should_ignore:
+ logger.debug(f"Skipping file '{operation.path_in_repo}' in commit (ignored by gitignore file).")
+ nb_ignored_files += 1
+ continue
+
+ # 2.a. Case adding a regular file
+ if isinstance(operation, CommitOperationAdd) and operation._upload_mode == "regular":
+ yield {
+ "key": "file",
+ "value": {
+ "content": operation.b64content().decode(),
+ "path": operation.path_in_repo,
+ "encoding": "base64",
+ },
+ }
+ # 2.b. Case adding an LFS file
+ elif isinstance(operation, CommitOperationAdd) and operation._upload_mode == "lfs":
+ yield {
+ "key": "lfsFile",
+ "value": {
+ "path": operation.path_in_repo,
+ "algo": "sha256",
+ "oid": operation.upload_info.sha256.hex(),
+ "size": operation.upload_info.size,
+ },
+ }
+ # 2.c. Case deleting a file or folder
+ elif isinstance(operation, CommitOperationDelete):
+ yield {
+ "key": "deletedFolder" if operation.is_folder else "deletedFile",
+ "value": {"path": operation.path_in_repo},
+ }
+ # 2.d. Case copying a file or folder
+ elif isinstance(operation, CommitOperationCopy):
+ file_to_copy = files_to_copy[(operation.src_path_in_repo, operation.src_revision)]
+ if isinstance(file_to_copy, bytes):
+ yield {
+ "key": "file",
+ "value": {
+ "content": base64.b64encode(file_to_copy).decode(),
+ "path": operation.path_in_repo,
+ "encoding": "base64",
+ },
+ }
+ elif file_to_copy.lfs:
+ yield {
+ "key": "lfsFile",
+ "value": {
+ "path": operation.path_in_repo,
+ "algo": "sha256",
+ "oid": file_to_copy.lfs.sha256,
+ },
+ }
+ else:
+ raise ValueError(
+ "Malformed files_to_copy (should be raw file content as bytes or RepoFile objects with LFS info."
+ )
+ # 2.e. Never expected to happen
+ else:
+ raise ValueError(
+ f"Unknown operation to commit. Operation: {operation}. Upload mode:"
+ f" {getattr(operation, '_upload_mode', None)}"
+ )
+
+ if nb_ignored_files > 0:
+ logger.info(f"Skipped {nb_ignored_files} file(s) in commit (ignored by gitignore file).")
diff --git a/env/Lib/site-packages/huggingface_hub/_commit_scheduler.py b/env/Lib/site-packages/huggingface_hub/_commit_scheduler.py
new file mode 100644
index 0000000000000000000000000000000000000000..f1f20339e7df2d17588623dc13bb3c6be6a46b53
--- /dev/null
+++ b/env/Lib/site-packages/huggingface_hub/_commit_scheduler.py
@@ -0,0 +1,353 @@
+import atexit
+import logging
+import os
+import time
+from concurrent.futures import Future
+from dataclasses import dataclass
+from io import SEEK_END, SEEK_SET, BytesIO
+from pathlib import Path
+from threading import Lock, Thread
+from typing import Dict, List, Optional, Union
+
+from .hf_api import DEFAULT_IGNORE_PATTERNS, CommitInfo, CommitOperationAdd, HfApi
+from .utils import filter_repo_objects
+
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass(frozen=True)
+class _FileToUpload:
+ """Temporary dataclass to store info about files to upload. Not meant to be used directly."""
+
+ local_path: Path
+ path_in_repo: str
+ size_limit: int
+ last_modified: float
+
+
+class CommitScheduler:
+ """
+ Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes).
+
+ The recommended way to use the scheduler is to use it as a context manager. This ensures that the scheduler is
+ properly stopped and the last commit is triggered when the script ends. The scheduler can also be stopped manually
+ with the `stop` method. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads)
+ to learn more about how to use it.
+
+ Args:
+ repo_id (`str`):
+ The id of the repo to commit to.
+ folder_path (`str` or `Path`):
+ Path to the local folder to upload regularly.
+ every (`int` or `float`, *optional*):
+ The number of minutes between each commit. Defaults to 5 minutes.
+ path_in_repo (`str`, *optional*):
+ Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder
+ of the repository.
+ repo_type (`str`, *optional*):
+ The type of the repo to commit to. Defaults to `model`.
+ revision (`str`, *optional*):
+ The revision of the repo to commit to. Defaults to `main`.
+ private (`bool`, *optional*):
+ Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
+ token (`str`, *optional*):
+ The token to use to commit to the repo. Defaults to the token saved on the machine.
+ allow_patterns (`List[str]` or `str`, *optional*):
+ If provided, only files matching at least one pattern are uploaded.
+ ignore_patterns (`List[str]` or `str`, *optional*):
+ If provided, files matching any of the patterns are not uploaded.
+ squash_history (`bool`, *optional*):
+ Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is
+ useful to avoid degraded performance on the repo when it grows too large.
+ hf_api (`HfApi`, *optional*):
+ The [`HfApi`] client to use to commit to the Hub. Can be set with custom settings (user agent, token,...).
+
+ Example:
+ ```py
+ >>> from pathlib import Path
+ >>> from huggingface_hub import CommitScheduler
+
+ # Scheduler uploads every 10 minutes
+ >>> csv_path = Path("watched_folder/data.csv")
+ >>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10)
+
+ >>> with csv_path.open("a") as f:
+ ... f.write("first line")
+
+ # Some time later (...)
+ >>> with csv_path.open("a") as f:
+ ... f.write("second line")
+ ```
+
+ Example using a context manager:
+ ```py
+ >>> from pathlib import Path
+ >>> from huggingface_hub import CommitScheduler
+
+ >>> with CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path="watched_folder", every=10) as scheduler:
+ ... csv_path = Path("watched_folder/data.csv")
+ ... with csv_path.open("a") as f:
+ ... f.write("first line")
+ ... (...)
+ ... with csv_path.open("a") as f:
+ ... f.write("second line")
+
+ # Scheduler is now stopped and the last commit has been triggered
+ ```
+ """
+
+ def __init__(
+ self,
+ *,
+ repo_id: str,
+ folder_path: Union[str, Path],
+ every: Union[int, float] = 5,
+ path_in_repo: Optional[str] = None,
+ repo_type: Optional[str] = None,
+ revision: Optional[str] = None,
+ private: Optional[bool] = None,
+ token: Optional[str] = None,
+ allow_patterns: Optional[Union[List[str], str]] = None,
+ ignore_patterns: Optional[Union[List[str], str]] = None,
+ squash_history: bool = False,
+ hf_api: Optional["HfApi"] = None,
+ ) -> None:
+ self.api = hf_api or HfApi(token=token)
+
+ # Folder
+ self.folder_path = Path(folder_path).expanduser().resolve()
+ self.path_in_repo = path_in_repo or ""
+ self.allow_patterns = allow_patterns
+
+ if ignore_patterns is None:
+ ignore_patterns = []
+ elif isinstance(ignore_patterns, str):
+ ignore_patterns = [ignore_patterns]
+ self.ignore_patterns = ignore_patterns + DEFAULT_IGNORE_PATTERNS
+
+ if self.folder_path.is_file():
+ raise ValueError(f"'folder_path' must be a directory, not a file: '{self.folder_path}'.")
+ self.folder_path.mkdir(parents=True, exist_ok=True)
+
+ # Repository
+ repo_url = self.api.create_repo(repo_id=repo_id, private=private, repo_type=repo_type, exist_ok=True)
+ self.repo_id = repo_url.repo_id
+ self.repo_type = repo_type
+ self.revision = revision
+ self.token = token
+
+ # Keep track of already uploaded files
+ self.last_uploaded: Dict[Path, float] = {} # key is local path, value is timestamp
+
+ # Scheduler
+ if not every > 0:
+ raise ValueError(f"'every' must be a positive integer, not '{every}'.")
+ self.lock = Lock()
+ self.every = every
+ self.squash_history = squash_history
+
+ logger.info(f"Scheduled job to push '{self.folder_path}' to '{self.repo_id}' every {self.every} minutes.")
+ self._scheduler_thread = Thread(target=self._run_scheduler, daemon=True)
+ self._scheduler_thread.start()
+ atexit.register(self._push_to_hub)
+
+ self.__stopped = False
+
+ def stop(self) -> None:
+ """Stop the scheduler.
+
+ A stopped scheduler cannot be restarted. Mostly for testing purposes.
+ """
+ self.__stopped = True
+
+ def __enter__(self) -> "CommitScheduler":
+ return self
+
+ def __exit__(self, exc_type, exc_value, traceback) -> None:
+ # Upload last changes before exiting
+ self.trigger().result()
+ self.stop()
+ return
+
+ def _run_scheduler(self) -> None:
+ """Dumb thread waiting between each scheduled push to Hub."""
+ while True:
+ self.last_future = self.trigger()
+ time.sleep(self.every * 60)
+ if self.__stopped:
+ break
+
+ def trigger(self) -> Future:
+ """Trigger a `push_to_hub` and return a future.
+
+ This method is automatically called every `every` minutes. You can also call it manually to trigger a commit
+ immediately, without waiting for the next scheduled commit.
+ """
+ return self.api.run_as_future(self._push_to_hub)
+
+ def _push_to_hub(self) -> Optional[CommitInfo]:
+ if self.__stopped: # If stopped, already scheduled commits are ignored
+ return None
+
+ logger.info("(Background) scheduled commit triggered.")
+ try:
+ value = self.push_to_hub()
+ if self.squash_history:
+ logger.info("(Background) squashing repo history.")
+ self.api.super_squash_history(repo_id=self.repo_id, repo_type=self.repo_type, branch=self.revision)
+ return value
+ except Exception as e:
+ logger.error(f"Error while pushing to Hub: {e}") # Depending on the setup, error might be silenced
+ raise
+
+ def push_to_hub(self) -> Optional[CommitInfo]:
+ """
+ Push folder to the Hub and return the commit info.
+
+
+ """
+ if metadata_size > constants.SAFETENSORS_MAX_HEADER_LENGTH:
+ raise SafetensorsParsingError(
+ f"Failed to parse safetensors header for '{filename}' (repo '{repo_id}', revision "
+ f"'{revision or constants.DEFAULT_REVISION}'): safetensors header is too big. Maximum supported size is "
+ f"{constants.SAFETENSORS_MAX_HEADER_LENGTH} bytes (got {metadata_size})."
+ )
+
+ # 3.a. Get metadata from payload
+ if metadata_size <= 100000:
+ metadata_as_bytes = response.content[8 : 8 + metadata_size]
+ else: # 3.b. Request full metadata
+ response = get_session().get(url, headers={**_headers, "range": f"bytes=8-{metadata_size + 7}"})
+ hf_raise_for_status(response)
+ metadata_as_bytes = response.content
+
+ # 4. Parse json header
+ try:
+ metadata_as_dict = json.loads(metadata_as_bytes.decode(errors="ignore"))
+ except json.JSONDecodeError as e:
+ raise SafetensorsParsingError(
+ f"Failed to parse safetensors header for '{filename}' (repo '{repo_id}', revision "
+ f"'{revision or constants.DEFAULT_REVISION}'): header is not json-encoded string. Please make sure this is a "
+ "correctly formatted safetensors file."
+ ) from e
+
+ try:
+ return SafetensorsFileMetadata(
+ metadata=metadata_as_dict.get("__metadata__", {}),
+ tensors={
+ key: TensorInfo(
+ dtype=tensor["dtype"],
+ shape=tensor["shape"],
+ data_offsets=tuple(tensor["data_offsets"]), # type: ignore
+ )
+ for key, tensor in metadata_as_dict.items()
+ if key != "__metadata__"
+ },
+ )
+ except (KeyError, IndexError) as e:
+ raise SafetensorsParsingError(
+ f"Failed to parse safetensors header for '{filename}' (repo '{repo_id}', revision "
+ f"'{revision or constants.DEFAULT_REVISION}'): header format not recognized. Please make sure this is a correctly"
+ " formatted safetensors file."
+ ) from e
+
+ @validate_hf_hub_args
+ def create_branch(
+ self,
+ repo_id: str,
+ *,
+ branch: str,
+ revision: Optional[str] = None,
+ token: Union[bool, str, None] = None,
+ repo_type: Optional[str] = None,
+ exist_ok: bool = False,
+ ) -> None:
+ """
+ Create a new branch for a repo on the Hub, starting from the specified revision (defaults to `main`).
+ To find a revision suiting your needs, you can use [`list_repo_refs`] or [`list_repo_commits`].
+
+ Args:
+ repo_id (`str`):
+ The repository in which the branch will be created.
+ Example: `"user/my-cool-model"`.
+
+ branch (`str`):
+ The name of the branch to create.
+
+ revision (`str`, *optional*):
+ The git revision to create the branch from. It can be a branch name or
+ the OID/SHA of a commit, as a hexadecimal string. Defaults to the head
+ of the `"main"` branch.
+
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ repo_type (`str`, *optional*):
+ Set to `"dataset"` or `"space"` if creating a branch on a dataset or
+ space, `None` or `"model"` if creating a branch on a model. Default is `None`.
+
+ exist_ok (`bool`, *optional*, defaults to `False`):
+ If `True`, do not raise an error if branch already exists.
+
+ Raises:
+ [`~utils.RepositoryNotFoundError`]:
+ If repository is not found (error 404): wrong repo_id/repo_type, private
+ but not authenticated or repo does not exist.
+ [`~utils.BadRequestError`]:
+ If the branch reference is invalid, e.g. `refs/pr/5` or `refs/foo/bar`.
+ [`~utils.HfHubHTTPError`]:
+ If the branch already exists on the repo (error 409) and `exist_ok` is
+ set to `False`.
+ """
+ if repo_type is None:
+ repo_type = constants.REPO_TYPE_MODEL
+ branch = quote(branch, safe="")
+
+ # Prepare request
+ branch_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/branch/{branch}"
+ headers = self._build_hf_headers(token=token)
+ payload = {}
+ if revision is not None:
+ payload["startingPoint"] = revision
+
+ # Create branch
+ response = get_session().post(url=branch_url, headers=headers, json=payload)
+ try:
+ hf_raise_for_status(response)
+ except HfHubHTTPError as e:
+ if exist_ok and e.response.status_code == 409:
+ return
+ elif exist_ok and e.response.status_code == 403:
+ # No write permission on the namespace but branch might already exist
+ try:
+ refs = self.list_repo_refs(repo_id=repo_id, repo_type=repo_type, token=token)
+ for branch_ref in refs.branches:
+ if branch_ref.name == branch:
+ return # Branch already exists => do not raise
+ except HfHubHTTPError:
+ pass # We raise the original error if the branch does not exist
+ raise
+
+ @validate_hf_hub_args
+ def delete_branch(
+ self,
+ repo_id: str,
+ *,
+ branch: str,
+ token: Union[bool, str, None] = None,
+ repo_type: Optional[str] = None,
+ ) -> None:
+ """
+ Delete a branch from a repo on the Hub.
+
+ Args:
+ repo_id (`str`):
+ The repository in which a branch will be deleted.
+ Example: `"user/my-cool-model"`.
+
+ branch (`str`):
+ The name of the branch to delete.
+
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ repo_type (`str`, *optional*):
+ Set to `"dataset"` or `"space"` if creating a branch on a dataset or
+ space, `None` or `"model"` if tagging a model. Default is `None`.
+
+ Raises:
+ [`~utils.RepositoryNotFoundError`]:
+ If repository is not found (error 404): wrong repo_id/repo_type, private
+ but not authenticated or repo does not exist.
+ [`~utils.HfHubHTTPError`]:
+ If trying to delete a protected branch (e.g. `main` cannot be deleted).
+ [`~utils.HfHubHTTPError`]:
+ If trying to delete a branch that does not exist.
+
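+ Example (usage sketch; repo and branch names are placeholders):
+
+ ```python
+ >>> from huggingface_hub import delete_branch
+ >>> delete_branch(repo_id="user/my-cool-model", branch="experiments")  # placeholder names
+ ```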
+ """
+ if repo_type is None:
+ repo_type = constants.REPO_TYPE_MODEL
+ branch = quote(branch, safe="")
+
+ # Prepare request
+ branch_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/branch/{branch}"
+ headers = self._build_hf_headers(token=token)
+
+ # Delete branch
+ response = get_session().delete(url=branch_url, headers=headers)
+ hf_raise_for_status(response)
+
+ @validate_hf_hub_args
+ def create_tag(
+ self,
+ repo_id: str,
+ *,
+ tag: str,
+ tag_message: Optional[str] = None,
+ revision: Optional[str] = None,
+ token: Union[bool, str, None] = None,
+ repo_type: Optional[str] = None,
+ exist_ok: bool = False,
+ ) -> None:
+ """
+ Tag a given commit of a repo on the Hub.
+
+ Args:
+ repo_id (`str`):
+ The repository in which a commit will be tagged.
+ Example: `"user/my-cool-model"`.
+
+ tag (`str`):
+ The name of the tag to create.
+
+ tag_message (`str`, *optional*):
+ The description of the tag to create.
+
+ revision (`str`, *optional*):
+ The git revision to tag. It can be a branch name or the OID/SHA of a
+ commit, as a hexadecimal string. Shorthands (7 first characters) are
+ also supported. Defaults to the head of the `"main"` branch.
+
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ repo_type (`str`, *optional*):
+ Set to `"dataset"` or `"space"` if tagging a dataset or
+ space, `None` or `"model"` if tagging a model. Default is
+ `None`.
+
+ exist_ok (`bool`, *optional*, defaults to `False`):
+ If `True`, do not raise an error if tag already exists.
+
+ Raises:
+ [`~utils.RepositoryNotFoundError`]:
+ If repository is not found (error 404): wrong repo_id/repo_type, private
+ but not authenticated or repo does not exist.
+ [`~utils.RevisionNotFoundError`]:
+ If revision is not found (error 404) on the repo.
+ [`~utils.HfHubHTTPError`]:
+ If the tag already exists on the repo (error 409) and `exist_ok` is
+ set to `False`.
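+
+ Example (usage sketch; the tag name and message are placeholders):
+
+ ```python
+ >>> from huggingface_hub import create_tag
+ >>> create_tag(repo_id="user/my-cool-model", tag="v1.0", tag_message="First release")  # placeholder tag
+ ```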
+ """
+ if repo_type is None:
+ repo_type = constants.REPO_TYPE_MODEL
+ revision = quote(revision, safe="") if revision is not None else constants.DEFAULT_REVISION
+
+ # Prepare request
+ tag_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tag/{revision}"
+ headers = self._build_hf_headers(token=token)
+ payload = {"tag": tag}
+ if tag_message is not None:
+ payload["message"] = tag_message
+
+ # Tag
+ response = get_session().post(url=tag_url, headers=headers, json=payload)
+ try:
+ hf_raise_for_status(response)
+ except HfHubHTTPError as e:
+ if not (e.response.status_code == 409 and exist_ok):
+ raise
+
+ @validate_hf_hub_args
+ def delete_tag(
+ self,
+ repo_id: str,
+ *,
+ tag: str,
+ token: Union[bool, str, None] = None,
+ repo_type: Optional[str] = None,
+ ) -> None:
+ """
+ Delete a tag from a repo on the Hub.
+
+ Args:
+ repo_id (`str`):
+ The repository in which a tag will be deleted.
+ Example: `"user/my-cool-model"`.
+
+ tag (`str`):
+ The name of the tag to delete.
+
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ repo_type (`str`, *optional*):
+ Set to `"dataset"` or `"space"` if tagging a dataset or space, `None` or
+ `"model"` if tagging a model. Default is `None`.
+
+ Raises:
+ [`~utils.RepositoryNotFoundError`]:
+ If repository is not found (error 404): wrong repo_id/repo_type, private
+ but not authenticated or repo does not exist.
+ [`~utils.RevisionNotFoundError`]:
+ If tag is not found.
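+
+ Example (usage sketch; the tag name is a placeholder):
+
+ ```python
+ >>> from huggingface_hub import delete_tag
+ >>> delete_tag(repo_id="user/my-cool-model", tag="v1.0")  # placeholder tag
+ ```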
+ """
+ if repo_type is None:
+ repo_type = constants.REPO_TYPE_MODEL
+ tag = quote(tag, safe="")
+
+ # Prepare request
+ tag_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tag/{tag}"
+ headers = self._build_hf_headers(token=token)
+
+ # Un-tag
+ response = get_session().delete(url=tag_url, headers=headers)
+ hf_raise_for_status(response)
+
+ @validate_hf_hub_args
+ def get_full_repo_name(
+ self,
+ model_id: str,
+ *,
+ organization: Optional[str] = None,
+ token: Union[bool, str, None] = None,
+ ):
+ """
+ Returns the repository name for a given model ID and optional
+ organization.
+
+ Args:
+ model_id (`str`):
+ The name of the model.
+ organization (`str`, *optional*):
+ If passed, the repository name will be in the organization
+ namespace instead of the user namespace.
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ Returns:
+ `str`: The repository name in the user's namespace
+ ({username}/{model_id}) if no organization is passed, and under the
+ organization namespace ({organization}/{model_id}) otherwise.
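+
+ Example (usage sketch; `"my-org"` is a placeholder organization name):
+
+ ```python
+ >>> from huggingface_hub import get_full_repo_name
+ >>> get_full_repo_name("my-cool-model", organization="my-org")
+ 'my-org/my-cool-model'
+ ```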
+ """
+ if organization is None:
+ if "/" in model_id:
+ username = model_id.split("/")[0]
+ else:
+ username = self.whoami(token=token)["name"] # type: ignore
+ return f"{username}/{model_id}"
+ else:
+ return f"{organization}/{model_id}"
+
+ @validate_hf_hub_args
+ def get_repo_discussions(
+ self,
+ repo_id: str,
+ *,
+ author: Optional[str] = None,
+ discussion_type: Optional[constants.DiscussionTypeFilter] = None,
+ discussion_status: Optional[constants.DiscussionStatusFilter] = None,
+ repo_type: Optional[str] = None,
+ token: Union[bool, str, None] = None,
+ ) -> Iterator[Discussion]:
+ """
+ Fetches Discussions and Pull Requests for the given repo.
+
+ Args:
+ repo_id (`str`):
+ A namespace (user or an organization) and a repo name separated
+ by a `/`.
+ author (`str`, *optional*):
+ Pass a value to filter by discussion author. `None` means no filter.
+ Default is `None`.
+ discussion_type (`str`, *optional*):
+ Set to `"pull_request"` to fetch only pull requests, `"discussion"`
+ to fetch only discussions. Set to `"all"` or `None` to fetch both.
+ Default is `None`.
+ discussion_status (`str`, *optional*):
+ Set to `"open"` (respectively `"closed"`) to fetch only open
+ (respectively closed) discussions. Set to `"all"` or `None`
+ to fetch both.
+ Default is `None`.
+ repo_type (`str`, *optional*):
+ Set to `"dataset"` or `"space"` if fetching from a dataset or
+ space, `None` or `"model"` if fetching from a model. Default is
+ `None`.
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ Returns:
+ `Iterator[Discussion]`: An iterator of [`Discussion`] objects.
+
+ Example:
+ Collecting all discussions of a repo in a list:
+
+ ```python
+ >>> from huggingface_hub import get_repo_discussions
+ >>> discussions_list = list(get_repo_discussions(repo_id="bert-base-uncased"))
+ ```
+
+ Iterating over discussions of a repo:
+
+ ```python
+ >>> from huggingface_hub import get_repo_discussions
+ >>> for discussion in get_repo_discussions(repo_id="bert-base-uncased"):
+ ... print(discussion.num, discussion.title)
+ ```
+ """
+ if repo_type not in constants.REPO_TYPES:
+ raise ValueError(f"Invalid repo type, must be one of {constants.REPO_TYPES}")
+ if repo_type is None:
+ repo_type = constants.REPO_TYPE_MODEL
+ if discussion_type is not None and discussion_type not in constants.DISCUSSION_TYPES:
+ raise ValueError(f"Invalid discussion_type, must be one of {constants.DISCUSSION_TYPES}")
+ if discussion_status is not None and discussion_status not in constants.DISCUSSION_STATUS:
+ raise ValueError(f"Invalid discussion_status, must be one of {constants.DISCUSSION_STATUS}")
+
+ headers = self._build_hf_headers(token=token)
+ path = f"{self.endpoint}/api/{repo_type}s/{repo_id}/discussions"
+
+ params: Dict[str, Union[str, int]] = {}
+ if discussion_type is not None:
+ params["type"] = discussion_type
+ if discussion_status is not None:
+ params["status"] = discussion_status
+ if author is not None:
+ params["author"] = author
+
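+ # Discussions are paginated: fetch one page at a time via the "p" query parameter
+ # until the number of items retrieved reaches the total count reported by the server.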
+ def _fetch_discussion_page(page_index: int):
+ params["p"] = page_index
+ resp = get_session().get(path, headers=headers, params=params)
+ hf_raise_for_status(resp)
+ paginated_discussions = resp.json()
+ total = paginated_discussions["count"]
+ start = paginated_discussions["start"]
+ discussions = paginated_discussions["discussions"]
+ has_next = (start + len(discussions)) < total
+ return discussions, has_next
+
+ has_next, page_index = True, 0
+
+ while has_next:
+ discussions, has_next = _fetch_discussion_page(page_index=page_index)
+ for discussion in discussions:
+ yield Discussion(
+ title=discussion["title"],
+ num=discussion["num"],
+ author=discussion.get("author", {}).get("name", "deleted"),
+ created_at=parse_datetime(discussion["createdAt"]),
+ status=discussion["status"],
+ repo_id=discussion["repo"]["name"],
+ repo_type=discussion["repo"]["type"],
+ is_pull_request=discussion["isPullRequest"],
+ endpoint=self.endpoint,
+ )
+ page_index = page_index + 1
+
+ @validate_hf_hub_args
+ def get_discussion_details(
+ self,
+ repo_id: str,
+ discussion_num: int,
+ *,
+ repo_type: Optional[str] = None,
+ token: Union[bool, str, None] = None,
+ ) -> DiscussionWithDetails:
+ """Fetches a Discussion's / Pull Request 's details from the Hub.
+
+ Args:
+ repo_id (`str`):
+ A namespace (user or an organization) and a repo name separated
+ by a `/`.
+ discussion_num (`int`):
+ The number of the Discussion or Pull Request. Must be a strictly positive integer.
+ repo_type (`str`, *optional*):
+ Set to `"dataset"` or `"space"` if uploading to a dataset or
+ space, `None` or `"model"` if uploading to a model. Default is
+ `None`.
+ token (Union[bool, str, None], optional):
+ A valid user access token (string). Defaults to the locally saved
+ token, which is the recommended method for authentication (see
+ https://huggingface.co/docs/huggingface_hub/quick-start#authentication).
+ To disable authentication, pass `False`.
+
+ Returns: [`DiscussionWithDetails`]
+
+