Update README.md

README.md (CHANGED)

@@ -6,6 +6,7 @@ language:

# scripts

Personal scripts to automate some tasks.\
Most of this is to get me familiar with python and hf_hub.\
Will try to keep external module use to a minimum, other than **huggingface_hub**.\
Feel free to send in pull requests or use this code however you'd like.\
*[GitHub mirror](https://github.com/anthonyg5005/hf-scripts)*

@@ -16,12 +17,16 @@ Feel free to send in pull requests or use this code however you'd like.\

- [Manage branches (create/delete)](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/manage%20branches.py) (the underlying API calls are sketched below)

- [EXL2 Private Quant V2](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/EXL2_Private_Quant_V2.ipynb) **(COLAB)** - still in testing; will potentially be available within two hours of this update, in the repo files for now.
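
The branch management script presumably boils down to two huggingface_hub calls; a minimal sketch (the repo id is a placeholder, not taken from the script):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`
repo = "your-username/your-model"  # placeholder repo id

# create a new branch from the current main revision
api.create_branch(repo_id=repo, branch="new-branch")

# delete a branch that is no longer needed
api.delete_branch(repo_id=repo, branch="old-branch")
```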

## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)

- EXL2 Private Quant V3
  - Will allow converting jsonl to parquet and bin to safetensors (a rough sketch of both conversions follows this list)

- Auto exl2 upload ipynb
  - Will create the repo and put quants from 2-6 bpw (or custom) on individual branches (see the second sketch after this list)
  - Also batch/bash scripts afterwards for non-jupyterlab environments

- Upload folder
  - Will allow uploading a folder to an existing or new repo
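
The two conversions planned for V3 are roughly the following in plain Python; a sketch assuming pandas (with pyarrow), torch and safetensors are installed, with placeholder file names:

```python
import pandas as pd
import torch
from safetensors.torch import save_file

# jsonl -> parquet (e.g. for a calibration dataset); needs pyarrow or fastparquet
df = pd.read_json("dataset.jsonl", lines=True)
df.to_parquet("dataset.parquet")

# pytorch .bin -> .safetensors (single-shard example)
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
# note: tied/shared tensors may need extra handling before saving
save_file(state_dict, "model.safetensors")
```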
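
The planned auto-upload flow (and the upload-folder idea) maps fairly directly onto huggingface_hub. A hedged sketch of a per-bpw branch layout - the repo id, folder paths and bpw list are placeholders, not the notebook's actual values:

```python
from huggingface_hub import HfApi, create_repo

api = HfApi()
repo = "your-username/model-exl2"  # placeholder repo id
create_repo(repo, private=True, exist_ok=True)  # new or existing repo

for bpw in [2.0, 3.0, 4.0, 5.0, 6.0]:  # or custom values
    branch = f"{bpw}bpw"
    api.create_branch(repo_id=repo, branch=branch)
    # assumes each quant was already written to its own local folder
    api.upload_folder(
        folder_path=f"quants/{branch}",
        repo_id=repo,
        revision=branch,
        commit_message=f"Upload {bpw} bpw exl2 quant",
    )
```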

@@ -36,7 +41,7 @@ Feel free to send in pull requests or use this code however you'd like.\

- Run the script and follow the prompts. You will need to be logged in to HF Hub; if you are not already, you will need a WRITE token, which you can get from your [HuggingFace settings](https://huggingface.co/settings/tokens). The script may get some updates in the future to handle more situations. All active updates will be on the [unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch. Colab and Kaggle keys are supported.

- EXL2 Private Quant
  - Allows you to quantize to exl2 using Colab. This version creates an exl2 quant and uploads it to a private repo. It should work on any Linux jupyterlab server with CUDA; ROCm should be supported by exl2 but is not tested. Currently being built to later become the base of `Auto exl2 upload ipynb`. (A rough sketch of the underlying convert call follows this list.)

- Download models
  - Make sure you have [requests](https://pypi.org/project/requests/) and [tqdm](https://pypi.org/project/tqdm/) installed. You can install them with `pip install requests tqdm`. To use the script, open a terminal and run `python download-model.py USER/MODEL:BRANCH`. There's also a `--help` flag to show the available arguments. To download from private repositories, make sure to log in with `huggingface-cli login` or (not recommended) the `HF_TOKEN` environment variable. (A rough sketch of this kind of download loop follows this list.)
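
The quantization step itself is exllamav2's converter; called from a notebook cell or script it looks roughly like this. A sketch only - the paths are placeholders and the flag names assume exllamav2's `convert.py` (`-i` input model, `-o` working dir, `-cf` output dir, `-b` bits per weight), so check `python convert.py -h` for your version:

```python
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "models/My-Model-fp16",        # downloaded fp16 model (placeholder)
        "-o", "temp/exl2-work",              # scratch/working directory
        "-cf", "models/My-Model-exl2-4bpw",  # where the finished quant is written
        "-b", "4.0",                         # target bits per weight, e.g. 2.0-6.0
    ],
    check=True,
)
```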
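
For context, the core of that kind of downloader is just a streamed GET with a tqdm progress bar. A minimal sketch - the URL follows the Hub's `resolve` endpoint, and the token handling is simplified compared to the script:

```python
import os
import requests
from tqdm import tqdm

def download_file(repo: str, filename: str, branch: str = "main") -> None:
    """Stream one file from a Hugging Face repo with a progress bar."""
    url = f"https://huggingface.co/{repo}/resolve/{branch}/{filename}"
    headers = {}
    token = os.environ.get("HF_TOKEN")  # only needed for private repos
    if token:
        headers["Authorization"] = f"Bearer {token}"

    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        r.raise_for_status()
        total = int(r.headers.get("content-length", 0))
        with open(filename, "wb") as f, tqdm(total=total, unit="B", unit_scale=True, desc=filename) as bar:
            for chunk in r.iter_content(chunk_size=1024 * 1024):
                f.write(chunk)
                bar.update(len(chunk))

# example: download_file("USER/MODEL", "config.json", branch="BRANCH")
```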