pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
# DavidAU/Ministral-3b-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`ministral/Ministral-3b-instruct`](https://huggingface.co/ministral/Ministral-3b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ministral/Ministral-3b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Ministral-3b-instruct-Q8_0-GGUF --model ministral-3b-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Ministral-3b-instruct-Q8_0-GGUF --model ministral-3b-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ministral-3b-instruct.Q8_0.gguf -n 128
```
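For programmatic use, here is a minimal sketch with the llama-cpp-python bindings (an assumption; the card itself only documents the Homebrew CLI). The `repo_id` and `filename` mirror the `--hf-repo` and `--model` flags above.
```python
# Minimal sketch, assuming llama-cpp-python is installed (`pip install llama-cpp-python huggingface_hub`).
# Llama.from_pretrained downloads the GGUF from the Hub, mirroring --hf-repo/--model above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/Ministral-3b-instruct-Q8_0-GGUF",
    filename="ministral-3b-instruct.Q8_0.gguf",
    n_ctx=2048,  # same context size as the server example
)

result = llm("The meaning to life and the universe is", max_tokens=128)
print(result["choices"][0]["text"])
```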
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "inference": {"parameters": {"temperature": 1, "top_p": 0.95, "top_k": 40, "repetition_penalty": 1.2}}, "pipeline_tag": "text-generation"} | DavidAU/Ministral-3b-instruct-Q8_0-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:14:20+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/Ministral-3b-instruct-Q8_0-GGUF
This model was converted to GGUF format from 'ministral/Ministral-3b-instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Ministral-3b-instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'ministral/Ministral-3b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/Ministral-3b-instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'ministral/Ministral-3b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`crimsonjoo/Neversleep-3B-Instruct-v0.1`](https://huggingface.co/crimsonjoo/Neversleep-3B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/crimsonjoo/Neversleep-3B-Instruct-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF --model neversleep-3b-instruct-v0.1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF --model neversleep-3b-instruct-v0.1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m neversleep-3b-instruct-v0.1.Q8_0.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "yanolja/EEVE-Korean-2.8B-v1.0"} | DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-2.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:15:07+00:00 | [] | [] | TAGS
#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-yanolja/EEVE-Korean-2.8B-v1.0 #license-apache-2.0 #region-us
|
# DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF
This model was converted to GGUF format from 'crimsonjoo/Neversleep-3B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF\nThis model was converted to GGUF format from 'crimsonjoo/Neversleep-3B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-yanolja/EEVE-Korean-2.8B-v1.0 #license-apache-2.0 #region-us \n",
"# DavidAU/Neversleep-3B-Instruct-v0.1-Q8_0-GGUF\nThis model was converted to GGUF format from 'crimsonjoo/Neversleep-3B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`AdrienB134/French-Alpaca-Croissant-1.3B-Instruct`](https://huggingface.co/AdrienB134/French-Alpaca-Croissant-1.3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AdrienB134/French-Alpaca-Croissant-1.3B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF --model french-alpaca-croissant-1.3b-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF --model french-alpaca-croissant-1.3b-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m french-alpaca-croissant-1.3b-instruct.Q8_0.gguf -n 128
```
| {"language": ["en", "fr"], "license": "mit", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "llama-cpp", "gguf-my-repo"], "datasets": ["jpacifico/French-Alpaca-dataset-Instruct-110K"], "base_model": "croissantllm/CroissantLLMBase"} | DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF | null | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"dataset:jpacifico/French-Alpaca-dataset-Instruct-110K",
"base_model:croissantllm/CroissantLLMBase",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:15:27+00:00 | [] | [
"en",
"fr"
] | TAGS
#transformers #gguf #text-generation-inference #unsloth #llama #trl #sft #llama-cpp #gguf-my-repo #en #fr #dataset-jpacifico/French-Alpaca-dataset-Instruct-110K #base_model-croissantllm/CroissantLLMBase #license-mit #endpoints_compatible #region-us
|
# DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from 'AdrienB134/French-Alpaca-Croissant-1.3B-Instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'AdrienB134/French-Alpaca-Croissant-1.3B-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #llama #trl #sft #llama-cpp #gguf-my-repo #en #fr #dataset-jpacifico/French-Alpaca-dataset-Instruct-110K #base_model-croissantllm/CroissantLLMBase #license-mit #endpoints_compatible #region-us \n",
"# DavidAU/French-Alpaca-Croissant-1.3B-Instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'AdrienB134/French-Alpaca-Croissant-1.3B-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/cutycat2000x/MeowGPT-3.5
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned to make them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeowGPT-3.5-GGUF/resolve/main/MeowGPT-3.5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
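As a concrete illustration (not part of the original card), a single quant from the table above can be fetched with `huggingface_hub`, assuming it is installed; the filename below is the "fast, recommended" Q4_K_M entry.
```python
# Hedged download sketch, assuming huggingface_hub is installed (`pip install huggingface_hub`).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MeowGPT-3.5-GGUF",
    filename="MeowGPT-3.5.Q4_K_M.gguf",  # "fast, recommended" row in the table above
)
print(path)  # local path of the cached GGUF file
```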
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["freeai", "conversational", "meowgpt", "gpt", "free", "opensource", "splittic", "ai"], "base_model": "cutycat2000x/MeowGPT-3.5", "quantized_by": "mradermacher"} | mradermacher/MeowGPT-3.5-GGUF | null | [
"transformers",
"gguf",
"freeai",
"conversational",
"meowgpt",
"gpt",
"free",
"opensource",
"splittic",
"ai",
"en",
"base_model:cutycat2000x/MeowGPT-3.5",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:15:54+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #freeai #conversational #meowgpt #gpt #free #opensource #splittic #ai #en #base_model-cutycat2000x/MeowGPT-3.5 #license-mit #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #freeai #conversational #meowgpt #gpt #free #opensource #splittic #ai #en #base_model-cutycat2000x/MeowGPT-3.5 #license-mit #endpoints_compatible #region-us \n"
] |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5938
- Accuracy: 0.83
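To try the checkpoint on a new clip, here is a hedged sketch using the transformers audio-classification pipeline (not part of the original card; the audio path is a placeholder).
```python
# Hedged inference sketch: classify the genre of one audio clip with this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="FredDYyy/distilhubert-finetuned-gtzan",
)

# "song.wav" is a placeholder path to a local audio file.
predictions = classifier("song.wav")
print(predictions[:3])  # top predicted genres with scores
```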
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
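As a rough illustration (an assumption, not taken from the actual training script), these settings map to transformers `TrainingArguments` as sketched below; only the numeric values come from the list above, and the Adam betas/epsilon are the library defaults.
```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# output_dir is a placeholder; Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```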
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 113 | 1.8713 | 0.49 |
| No log | 2.0 | 226 | 1.2682 | 0.67 |
| No log | 3.0 | 339 | 1.0483 | 0.69 |
| No log | 4.0 | 452 | 0.9157 | 0.71 |
| 1.2624 | 5.0 | 565 | 0.6962 | 0.8 |
| 1.2624 | 6.0 | 678 | 0.6089 | 0.84 |
| 1.2624 | 7.0 | 791 | 0.5878 | 0.8 |
| 1.2624 | 8.0 | 904 | 0.5988 | 0.81 |
| 1.2624 | 9.0 | 1017 | 0.6077 | 0.81 |
| 0.295 | 10.0 | 1130 | 0.5938 | 0.83 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-finetuned-gtzan", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.83, "name": "Accuracy"}]}]}]} | FredDYyy/distilhubert-finetuned-gtzan | null | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:16:18+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us
| distilhubert-finetuned-gtzan
============================
This model is a fine-tuned version of ntu-spml/distilhubert on the GTZAN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5938
* Accuracy: 0.83
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #hubert #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-ntu-spml/distilhubert #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# DavidAU/starcoder2-3b-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`TechxGenus/starcoder2-3b-instruct`](https://huggingface.co/TechxGenus/starcoder2-3b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TechxGenus/starcoder2-3b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/starcoder2-3b-instruct-Q8_0-GGUF --model starcoder2-3b-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/starcoder2-3b-instruct-Q8_0-GGUF --model starcoder2-3b-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m starcoder2-3b-instruct.Q8_0.gguf -n 128
```
| {"license": "bigcode-openrail-m", "library_name": "transformers", "tags": ["code", "starcoder2", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | DavidAU/starcoder2-3b-instruct-Q8_0-GGUF | null | [
"transformers",
"gguf",
"code",
"starcoder2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:16:18+00:00 | [] | [] | TAGS
#transformers #gguf #code #starcoder2 #llama-cpp #gguf-my-repo #text-generation #license-bigcode-openrail-m #endpoints_compatible #region-us
|
# DavidAU/starcoder2-3b-instruct-Q8_0-GGUF
This model was converted to GGUF format from 'TechxGenus/starcoder2-3b-instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/starcoder2-3b-instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'TechxGenus/starcoder2-3b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #code #starcoder2 #llama-cpp #gguf-my-repo #text-generation #license-bigcode-openrail-m #endpoints_compatible #region-us \n",
"# DavidAU/starcoder2-3b-instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'TechxGenus/starcoder2-3b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/stable-code-instruct-3b-Q6_K-GGUF
This model was converted to GGUF format from [`stabilityai/stable-code-instruct-3b`](https://huggingface.co/stabilityai/stable-code-instruct-3b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/stabilityai/stable-code-instruct-3b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/stable-code-instruct-3b-Q6_K-GGUF --model stable-code-instruct-3b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/stable-code-instruct-3b-Q6_K-GGUF --model stable-code-instruct-3b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m stable-code-instruct-3b.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["causal-lm", "code", "llama-cpp", "gguf-my-repo"], "metrics": ["code_eval"], "model-index": [{"name": "stabilityai/stable-code-instruct-3b", "results": [{"task": {"type": "text-generation"}, "dataset": {"name": "MultiPL-HumanEval (Python)", "type": "nuprl/MultiPL-E"}, "metrics": [{"type": "pass@1", "value": 32.4, "name": "pass@1", "verified": false}, {"type": "pass@1", "value": 30.9, "name": "pass@1", "verified": false}, {"type": "pass@1", "value": 32.1, "name": "pass@1", "verified": false}, {"type": "pass@1", "value": 32.1, "name": "pass@1", "verified": false}, {"type": "pass@1", "value": 24.2, "name": "pass@1", "verified": false}, {"type": "pass@1", "value": 23.0, "name": "pass@1", "verified": false}]}]}]} | DavidAU/stable-code-instruct-3b-Q6_K-GGUF | null | [
"transformers",
"gguf",
"causal-lm",
"code",
"llama-cpp",
"gguf-my-repo",
"en",
"license:other",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:17:19+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #causal-lm #code #llama-cpp #gguf-my-repo #en #license-other #model-index #endpoints_compatible #region-us
|
# DavidAU/stable-code-instruct-3b-Q6_K-GGUF
This model was converted to GGUF format from 'stabilityai/stable-code-instruct-3b' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/stable-code-instruct-3b-Q6_K-GGUF\nThis model was converted to GGUF format from 'stabilityai/stable-code-instruct-3b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #causal-lm #code #llama-cpp #gguf-my-repo #en #license-other #model-index #endpoints_compatible #region-us \n",
"# DavidAU/stable-code-instruct-3b-Q6_K-GGUF\nThis model was converted to GGUF format from 'stabilityai/stable-code-instruct-3b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7717
- F1 Score: 0.5768
- Accuracy: 0.5816
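A heavily hedged loading sketch follows (the card does not state the base model class or adapter format; `AutoModel` with `trust_remote_code=True` and a standard PEFT adapter layout are assumptions).
```python
# Hedged sketch: attach this PEFT adapter to its base model.
# AutoModel + trust_remote_code and a standard PEFT adapter layout are assumptions.
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(
    base,
    "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_65536_512_47M-L32_all",
)
model.eval()
```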
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.666 | 16.67 | 200 | 0.6762 | 0.5703 | 0.5839 |
| 0.6246 | 33.33 | 400 | 0.6966 | 0.5662 | 0.5735 |
| 0.5972 | 50.0 | 600 | 0.7075 | 0.5821 | 0.5885 |
| 0.5733 | 66.67 | 800 | 0.7158 | 0.5759 | 0.5855 |
| 0.5576 | 83.33 | 1000 | 0.7148 | 0.5738 | 0.5803 |
| 0.5481 | 100.0 | 1200 | 0.7429 | 0.5743 | 0.5751 |
| 0.5398 | 116.67 | 1400 | 0.7456 | 0.5738 | 0.5823 |
| 0.5344 | 133.33 | 1600 | 0.7464 | 0.5725 | 0.5865 |
| 0.5298 | 150.0 | 1800 | 0.7827 | 0.5758 | 0.5826 |
| 0.5251 | 166.67 | 2000 | 0.7499 | 0.5792 | 0.5793 |
| 0.5205 | 183.33 | 2200 | 0.7612 | 0.5762 | 0.5849 |
| 0.5152 | 200.0 | 2400 | 0.7661 | 0.5853 | 0.5885 |
| 0.5106 | 216.67 | 2600 | 0.7705 | 0.5794 | 0.5888 |
| 0.5034 | 233.33 | 2800 | 0.7775 | 0.5785 | 0.5829 |
| 0.5007 | 250.0 | 3000 | 0.7887 | 0.5813 | 0.5859 |
| 0.4946 | 266.67 | 3200 | 0.7990 | 0.5809 | 0.5868 |
| 0.4889 | 283.33 | 3400 | 0.7906 | 0.5833 | 0.5891 |
| 0.4832 | 300.0 | 3600 | 0.8005 | 0.5781 | 0.5803 |
| 0.4759 | 316.67 | 3800 | 0.8250 | 0.5811 | 0.5823 |
| 0.4696 | 333.33 | 4000 | 0.8102 | 0.5756 | 0.5836 |
| 0.4644 | 350.0 | 4200 | 0.8008 | 0.5750 | 0.5833 |
| 0.4587 | 366.67 | 4400 | 0.8618 | 0.5700 | 0.5702 |
| 0.4503 | 383.33 | 4600 | 0.8464 | 0.5712 | 0.5718 |
| 0.4471 | 400.0 | 4800 | 0.8315 | 0.5724 | 0.5764 |
| 0.4394 | 416.67 | 5000 | 0.8462 | 0.5699 | 0.5754 |
| 0.4329 | 433.33 | 5200 | 0.8581 | 0.5730 | 0.5833 |
| 0.4292 | 450.0 | 5400 | 0.8618 | 0.5720 | 0.5777 |
| 0.423 | 466.67 | 5600 | 0.8812 | 0.5654 | 0.5670 |
| 0.4174 | 483.33 | 5800 | 0.8591 | 0.5693 | 0.5745 |
| 0.4128 | 500.0 | 6000 | 0.8638 | 0.5667 | 0.5692 |
| 0.4072 | 516.67 | 6200 | 0.8730 | 0.5728 | 0.5790 |
| 0.4042 | 533.33 | 6400 | 0.8903 | 0.5692 | 0.5732 |
| 0.3984 | 550.0 | 6600 | 0.8926 | 0.5694 | 0.5689 |
| 0.3957 | 566.67 | 6800 | 0.8753 | 0.5671 | 0.5689 |
| 0.3926 | 583.33 | 7000 | 0.8916 | 0.5696 | 0.5748 |
| 0.3878 | 600.0 | 7200 | 0.8706 | 0.5654 | 0.5650 |
| 0.3853 | 616.67 | 7400 | 0.9053 | 0.5662 | 0.5673 |
| 0.38 | 633.33 | 7600 | 0.9107 | 0.5714 | 0.5774 |
| 0.3786 | 650.0 | 7800 | 0.9142 | 0.5716 | 0.5764 |
| 0.3749 | 666.67 | 8000 | 0.9260 | 0.5666 | 0.5705 |
| 0.3734 | 683.33 | 8200 | 0.9248 | 0.5685 | 0.5738 |
| 0.3713 | 700.0 | 8400 | 0.9235 | 0.5728 | 0.5793 |
| 0.3685 | 716.67 | 8600 | 0.9179 | 0.5701 | 0.5735 |
| 0.3675 | 733.33 | 8800 | 0.8993 | 0.5690 | 0.5712 |
| 0.3663 | 750.0 | 9000 | 0.9203 | 0.5676 | 0.5705 |
| 0.3639 | 766.67 | 9200 | 0.9182 | 0.5711 | 0.5748 |
| 0.3614 | 783.33 | 9400 | 0.9328 | 0.5686 | 0.5705 |
| 0.3617 | 800.0 | 9600 | 0.9240 | 0.5698 | 0.5745 |
| 0.3611 | 816.67 | 9800 | 0.9229 | 0.5724 | 0.5767 |
| 0.361 | 833.33 | 10000 | 0.9249 | 0.5685 | 0.5725 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:17:26+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_65536\_512\_47M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7717
* F1 Score: 0.5768
* Accuracy: 0.5816
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8796
- F1 Score: 0.6063
- Accuracy: 0.6056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6675 | 18.18 | 200 | 0.6827 | 0.5562 | 0.5804 |
| 0.6132 | 36.36 | 400 | 0.6947 | 0.5834 | 0.5833 |
| 0.5782 | 54.55 | 600 | 0.7236 | 0.5806 | 0.5804 |
| 0.549 | 72.73 | 800 | 0.7383 | 0.5815 | 0.5808 |
| 0.5299 | 90.91 | 1000 | 0.7243 | 0.5903 | 0.5894 |
| 0.5192 | 109.09 | 1200 | 0.7358 | 0.5795 | 0.5804 |
| 0.5117 | 127.27 | 1400 | 0.7422 | 0.5970 | 0.5981 |
| 0.5039 | 145.45 | 1600 | 0.7369 | 0.5910 | 0.5909 |
| 0.4977 | 163.64 | 1800 | 0.7361 | 0.5971 | 0.6002 |
| 0.4922 | 181.82 | 2000 | 0.7444 | 0.5935 | 0.5927 |
| 0.4857 | 200.0 | 2200 | 0.7527 | 0.5938 | 0.5934 |
| 0.48 | 218.18 | 2400 | 0.7541 | 0.5999 | 0.6009 |
| 0.4742 | 236.36 | 2600 | 0.7643 | 0.5928 | 0.5919 |
| 0.4686 | 254.55 | 2800 | 0.7712 | 0.5974 | 0.5970 |
| 0.4615 | 272.73 | 3000 | 0.7758 | 0.6003 | 0.5999 |
| 0.4533 | 290.91 | 3200 | 0.7859 | 0.6009 | 0.6006 |
| 0.4482 | 309.09 | 3400 | 0.7965 | 0.6025 | 0.6020 |
| 0.4426 | 327.27 | 3600 | 0.7669 | 0.6063 | 0.6056 |
| 0.4368 | 345.45 | 3800 | 0.7992 | 0.6057 | 0.6132 |
| 0.4303 | 363.64 | 4000 | 0.7978 | 0.6028 | 0.6027 |
| 0.4241 | 381.82 | 4200 | 0.8176 | 0.6065 | 0.6060 |
| 0.4198 | 400.0 | 4400 | 0.8151 | 0.6074 | 0.6067 |
| 0.4147 | 418.18 | 4600 | 0.8148 | 0.6084 | 0.6078 |
| 0.4095 | 436.36 | 4800 | 0.8033 | 0.6021 | 0.6013 |
| 0.4058 | 454.55 | 5000 | 0.8319 | 0.6077 | 0.6078 |
| 0.401 | 472.73 | 5200 | 0.8249 | 0.6016 | 0.6009 |
| 0.3981 | 490.91 | 5400 | 0.8068 | 0.6104 | 0.6114 |
| 0.392 | 509.09 | 5600 | 0.8227 | 0.6098 | 0.6103 |
| 0.3889 | 527.27 | 5800 | 0.8280 | 0.6087 | 0.6085 |
| 0.3858 | 545.45 | 6000 | 0.8449 | 0.6102 | 0.6121 |
| 0.381 | 563.64 | 6200 | 0.8577 | 0.6134 | 0.6135 |
| 0.3803 | 581.82 | 6400 | 0.8374 | 0.6038 | 0.6031 |
| 0.3754 | 600.0 | 6600 | 0.8571 | 0.6062 | 0.6056 |
| 0.3735 | 618.18 | 6800 | 0.8741 | 0.6000 | 0.5991 |
| 0.3687 | 636.36 | 7000 | 0.8457 | 0.6034 | 0.6027 |
| 0.3673 | 654.55 | 7200 | 0.8713 | 0.6015 | 0.6009 |
| 0.3645 | 672.73 | 7400 | 0.8648 | 0.6028 | 0.6020 |
| 0.3612 | 690.91 | 7600 | 0.8581 | 0.6010 | 0.6002 |
| 0.3612 | 709.09 | 7800 | 0.8585 | 0.6010 | 0.6002 |
| 0.3583 | 727.27 | 8000 | 0.8635 | 0.6035 | 0.6027 |
| 0.3571 | 745.45 | 8200 | 0.8830 | 0.5998 | 0.5991 |
| 0.3538 | 763.64 | 8400 | 0.8784 | 0.6021 | 0.6017 |
| 0.3528 | 781.82 | 8600 | 0.8700 | 0.6025 | 0.6020 |
| 0.3504 | 800.0 | 8800 | 0.8843 | 0.6059 | 0.6053 |
| 0.3489 | 818.18 | 9000 | 0.8876 | 0.6031 | 0.6027 |
| 0.3506 | 836.36 | 9200 | 0.8866 | 0.6069 | 0.6063 |
| 0.3491 | 854.55 | 9400 | 0.8799 | 0.6032 | 0.6027 |
| 0.3473 | 872.73 | 9600 | 0.8873 | 0.6054 | 0.6049 |
| 0.3469 | 890.91 | 9800 | 0.8868 | 0.6043 | 0.6038 |
| 0.3469 | 909.09 | 10000 | 0.8866 | 0.6062 | 0.6056 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:17:56+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_65536\_512\_47M-L32\_all
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8796
* F1 Score: 0.6063
* Accuracy: 0.6056
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# DavidAU/starcoder2-3b-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`TechxGenus/starcoder2-3b-instruct`](https://huggingface.co/TechxGenus/starcoder2-3b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TechxGenus/starcoder2-3b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/starcoder2-3b-instruct-Q6_K-GGUF --model starcoder2-3b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/starcoder2-3b-instruct-Q6_K-GGUF --model starcoder2-3b-instruct.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m starcoder2-3b-instruct.Q6_K.gguf -n 128
```
| {"license": "bigcode-openrail-m", "library_name": "transformers", "tags": ["code", "starcoder2", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | DavidAU/starcoder2-3b-instruct-Q6_K-GGUF | null | [
"transformers",
"gguf",
"code",
"starcoder2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:18:04+00:00 | [] | [] | TAGS
#transformers #gguf #code #starcoder2 #llama-cpp #gguf-my-repo #text-generation #license-bigcode-openrail-m #endpoints_compatible #region-us
|
# DavidAU/starcoder2-3b-instruct-Q6_K-GGUF
This model was converted to GGUF format from 'TechxGenus/starcoder2-3b-instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/starcoder2-3b-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'TechxGenus/starcoder2-3b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #code #starcoder2 #llama-cpp #gguf-my-repo #text-generation #license-bigcode-openrail-m #endpoints_compatible #region-us \n",
"# DavidAU/starcoder2-3b-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'TechxGenus/starcoder2-3b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASRr
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
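Here is a hedged inference sketch (not part of the original card) using the transformers automatic-speech-recognition pipeline; the audio path is a placeholder.
```python
# Hedged inference sketch: transcribe one clip with this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hemg/ASRr",
)

# "sample.wav" is a placeholder path to a local audio file.
print(asr("sample.wav")["text"])
```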
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["minds14"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "ASRr", "results": []}]} | Hemg/ASRr | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:18:24+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
|
# ASRr
This model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# ASRr\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"# ASRr\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null | null |
# DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF
This model was converted to GGUF format from [`Aryanne/Mistral-3B-Instruct-v0.2-init`](https://huggingface.co/Aryanne/Mistral-3B-Instruct-v0.2-init) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aryanne/Mistral-3B-Instruct-v0.2-init) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF --model mistral-3b-instruct-v0.2-init.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF --model mistral-3b-instruct-v0.2-init.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-3b-instruct-v0.2-init.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "inference": false} | DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:20:19+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF
This model was converted to GGUF format from 'Aryanne/Mistral-3B-Instruct-v0.2-init' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF\nThis model was converted to GGUF format from 'Aryanne/Mistral-3B-Instruct-v0.2-init' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/Mistral-3B-Instruct-v0.2-init-Q6_K-GGUF\nThis model was converted to GGUF format from 'Aryanne/Mistral-3B-Instruct-v0.2-init' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF
This model was converted to GGUF format from [`cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0`](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v0.Q8_0.gguf -n 128
```
| {"language": ["pt", "en"], "license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"pt",
"en",
"license:mit",
"region:us"
] | null | 2024-04-17T03:20:52+00:00 | [] | [
"pt",
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us
|
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF
This model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us \n",
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF
This model was converted to GGUF format from [`cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1`](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.Q8_0.gguf -n 128
```
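As an aside that is not part of the original card, the same quantized file can also be loaded from Python through the `llama-cpp-python` bindings. The sketch below assumes the GGUF file from this repo has already been downloaded locally; the file name matches the one used in the commands above.
```python
# Minimal sketch (not from the original card): loading the GGUF file with llama-cpp-python
# instead of the CLI. The local path is an assumption about where the file was saved.
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v1.Q8_0.gguf",
    n_ctx=2048,  # same context size as the server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```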
| {"language": ["pt", "en"], "license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"pt",
"en",
"license:mit",
"region:us"
] | null | 2024-04-17T03:21:19+00:00 | [] | [
"pt",
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us
|
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF
This model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us \n",
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF
This model was converted to GGUF format from [`Aryanne/Mistral-3B-Instruct-v0.2-init`](https://huggingface.co/Aryanne/Mistral-3B-Instruct-v0.2-init) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Aryanne/Mistral-3B-Instruct-v0.2-init) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF --model mistral-3b-instruct-v0.2-init.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF --model mistral-3b-instruct-v0.2-init.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-3b-instruct-v0.2-init.Q8_0.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "inference": false} | DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:21:47+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF
This model was converted to GGUF format from 'Aryanne/Mistral-3B-Instruct-v0.2-init' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF\nThis model was converted to GGUF format from 'Aryanne/Mistral-3B-Instruct-v0.2-init' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/Mistral-3B-Instruct-v0.2-init-Q8_0-GGUF\nThis model was converted to GGUF format from 'Aryanne/Mistral-3B-Instruct-v0.2-init' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF
This model was converted to GGUF format from [`cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2`](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v2.Q8_0.gguf -n 128
```
| {"language": ["pt", "en"], "license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"} | DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"pt",
"en",
"license:mit",
"region:us"
] | null | 2024-04-17T03:22:09+00:00 | [] | [
"pt",
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us
|
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF
This model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us \n",
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF
This model was converted to GGUF format from [`cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k`](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf -n 128
```
| {"language": ["pt", "en"], "license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "widget": [{"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction: \nSua instru\u00e7\u00e3o aqui\n\n### Response:\n"}]} | DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"pt",
"en",
"license:mit",
"region:us"
] | null | 2024-04-17T03:22:24+00:00 | [] | [
"pt",
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us
|
# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF
This model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #pt #en #license-mit #region-us \n",
"# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF\nThis model was converted to GGUF format from 'cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF
This model was converted to GGUF format from [`habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1`](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF --model tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF --model tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-step-2t-lr-5-5ep-oasst1-top1-instruct-v1.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["OpenAssistant/oasst_top1_2023-08-25"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T", "pipeline_tag": "text-generation"} | DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:22:37+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-OpenAssistant/oasst_top1_2023-08-25 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T #license-apache-2.0 #region-us
|
# DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF
This model was converted to GGUF format from 'habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF\nThis model was converted to GGUF format from 'habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-OpenAssistant/oasst_top1_2023-08-25 #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T #license-apache-2.0 #region-us \n",
"# DavidAU/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1-Q8_0-GGUF\nThis model was converted to GGUF format from 'habanoz/TinyLlama-1.1B-step-2T-lr-5-5ep-oasst1-top1-instruct-V1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF
This model was converted to GGUF format from [`habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1`](https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF --model tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF --model tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-2t-lr-2e-4-3ep-dolly-15k-instruct-v1.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["databricks/databricks-dolly-15k"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T", "pipeline_tag": "text-generation"} | DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:23:01+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-databricks/databricks-dolly-15k #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T #license-apache-2.0 #region-us
|
# DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF
This model was converted to GGUF format from 'habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF\nThis model was converted to GGUF format from 'habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-databricks/databricks-dolly-15k #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T #license-apache-2.0 #region-us \n",
"# DavidAU/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1-Q8_0-GGUF\nThis model was converted to GGUF format from 'habanoz/TinyLlama-1.1B-2T-lr-2e-4-3ep-dolly-15k-instruct-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/tamimhasanbhuiyan/huggingface/runs/qiuhht9t)
# MMS-Adapter-Testing
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9386
- Wer: 0.6378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
- mixed_precision_training: Native AMP
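For illustration only, the settings above map roughly onto a `TrainingArguments` configuration like the sketch below. The training script itself is not included in this card, so the output directory and any value not listed above are assumptions.
```python
# Illustrative sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir is a placeholder; Adam betas/epsilon match the transformers defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mms-adapter-testing",   # placeholder, not from the card
    learning_rate=1e-3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=1,
    fp16=True,                          # "Native AMP" mixed precision
)
```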
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.768 | 0.0150 | 100 | 2.1552 | 0.9390 |
| 2.3489 | 0.0299 | 200 | 1.1714 | 0.7209 |
| 1.7924 | 0.0449 | 300 | 1.1720 | 0.7586 |
| 1.7483 | 0.0598 | 400 | 1.0868 | 0.7237 |
| 1.8404 | 0.0748 | 500 | 1.0824 | 0.6963 |
| 1.8122 | 0.0897 | 600 | 1.0771 | 0.6866 |
| 1.7504 | 0.1047 | 700 | 1.0705 | 0.6970 |
| 1.6675 | 0.1196 | 800 | 1.0688 | 0.6913 |
| 1.6123 | 0.1346 | 900 | 1.0446 | 0.6888 |
| 1.6237 | 0.1495 | 1000 | 1.0586 | 0.7034 |
| 1.6714 | 0.1645 | 1100 | 1.0562 | 0.6866 |
| 1.8129 | 0.1795 | 1200 | 1.0363 | 0.6891 |
| 1.7839 | 0.1944 | 1300 | 1.0374 | 0.6631 |
| 1.7305 | 0.2094 | 1400 | 1.0211 | 0.6834 |
| 1.5496 | 0.2243 | 1500 | 1.0225 | 0.6856 |
| 1.5106 | 0.2393 | 1600 | 1.0387 | 0.7127 |
| 1.7517 | 0.2542 | 1700 | 1.0561 | 0.6898 |
| 1.7117 | 0.2692 | 1800 | 1.0303 | 0.6866 |
| 1.6854 | 0.2841 | 1900 | 1.0240 | 0.6888 |
| 1.5186 | 0.2991 | 2000 | 1.0207 | 0.6873 |
| 1.5631 | 0.3140 | 2100 | 0.9964 | 0.6677 |
| 1.6909 | 0.3290 | 2200 | 1.0090 | 0.6738 |
| 1.5698 | 0.3440 | 2300 | 1.0016 | 0.6809 |
| 1.6702 | 0.3589 | 2400 | 0.9996 | 0.6749 |
| 1.628 | 0.3739 | 2500 | 1.0074 | 0.6699 |
| 1.8025 | 0.3888 | 2600 | 1.0312 | 0.6934 |
| 1.5986 | 0.4038 | 2700 | 0.9871 | 0.6667 |
| 1.5687 | 0.4187 | 2800 | 0.9893 | 0.6567 |
| 1.6444 | 0.4337 | 2900 | 0.9943 | 0.6674 |
| 1.5869 | 0.4486 | 3000 | 0.9831 | 0.6706 |
| 1.443 | 0.4636 | 3100 | 1.0192 | 0.7045 |
| 1.569 | 0.4785 | 3200 | 0.9783 | 0.6635 |
| 1.5302 | 0.4935 | 3300 | 0.9898 | 0.6727 |
| 1.5879 | 0.5084 | 3400 | 0.9773 | 0.6670 |
| 1.5739 | 0.5234 | 3500 | 0.9837 | 0.6895 |
| 1.5684 | 0.5384 | 3600 | 0.9836 | 0.6667 |
| 1.6397 | 0.5533 | 3700 | 0.9673 | 0.6578 |
| 1.5639 | 0.5683 | 3800 | 0.9888 | 0.6599 |
| 1.6773 | 0.5832 | 3900 | 0.9788 | 0.6613 |
| 1.5069 | 0.5982 | 4000 | 0.9801 | 0.6542 |
| 1.4801 | 0.6131 | 4100 | 0.9587 | 0.6545 |
| 1.7308 | 0.6281 | 4200 | 0.9599 | 0.6706 |
| 1.4852 | 0.6430 | 4300 | 0.9728 | 0.6663 |
| 1.4654 | 0.6580 | 4400 | 0.9468 | 0.6417 |
| 1.801 | 0.6729 | 4500 | 0.9591 | 0.6556 |
| 2.0928 | 0.6879 | 4600 | 0.9857 | 0.6670 |
| 1.561 | 0.7029 | 4700 | 0.9550 | 0.6503 |
| 1.6623 | 0.7178 | 4800 | 0.9587 | 0.6524 |
| 1.5252 | 0.7328 | 4900 | 0.9551 | 0.6531 |
| 1.5539 | 0.7477 | 5000 | 0.9660 | 0.6513 |
| 1.5571 | 0.7627 | 5100 | 0.9557 | 0.6531 |
| 1.6584 | 0.7776 | 5200 | 0.9649 | 0.6563 |
| 1.5072 | 0.7926 | 5300 | 0.9604 | 0.6481 |
| 1.5362 | 0.8075 | 5400 | 0.9457 | 0.6314 |
| 1.4772 | 0.8225 | 5500 | 0.9491 | 0.6449 |
| 1.3731 | 0.8374 | 5600 | 0.9609 | 0.6478 |
| 1.5795 | 0.8524 | 5700 | 0.9568 | 0.6567 |
| 1.4013 | 0.8674 | 5800 | 0.9457 | 0.6406 |
| 1.5817 | 0.8823 | 5900 | 0.9437 | 0.6513 |
| 1.4211 | 0.8973 | 6000 | 0.9433 | 0.6381 |
| 1.4341 | 0.9122 | 6100 | 0.9420 | 0.6353 |
| 1.4818 | 0.9272 | 6200 | 0.9407 | 0.6456 |
| 1.5241 | 0.9421 | 6300 | 0.9400 | 0.6381 |
| 1.575 | 0.9571 | 6400 | 0.9374 | 0.6392 |
| 1.5232 | 0.9720 | 6500 | 0.9385 | 0.6364 |
| 1.8634 | 0.9870 | 6600 | 0.9386 | 0.6378 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
| {"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/mms-1b-all", "model-index": [{"name": "MMS-Adapter-Testing", "results": []}]} | tanvirsaad/MMS-Adapter-Testing | null | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:23:35+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/mms-1b-all #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| <img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/>
MMS-Adapter-Testing
===================
This model is a fine-tuned version of facebook/mms-1b-all on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9386
* Wer: 0.6378
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.001
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 10
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.41.0.dev0
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/mms-1b-all #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 10\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.41.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
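The section above is left unfilled by the card author. Purely as a non-authoritative sketch, a checkpoint carrying the `custom_code` tag such as this one would typically be loaded along the following lines; the repo id is taken from this card's metadata and the generation settings are illustrative.
```python
# Non-authoritative sketch: standard transformers loading for a custom-code checkpoint.
# The repo id comes from this card's metadata; prompt and generation length are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "voidful/phi-2_base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```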
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | voidful/phi-2_base | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:23:56+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me3-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7760
- F1 Score: 0.5463
- Accuracy: 0.5462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
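Purely as an illustration: the card states PEFT 0.9.0 but does not spell out the adapter configuration, so the LoRA choice, the task type, and the base-model class in the sketch below are assumptions rather than facts from the source.
```python
# Assumption-heavy sketch: one plausible way the listed settings could be wired up with PEFT.
# The adapter type (LoRA), task type, and base-model class are NOT stated in the card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_65536_512_47M", num_labels=2  # model class/labels assumed
)
model = get_peft_model(base, LoraConfig(task_type="SEQ_CLS"))

args = TrainingArguments(
    output_dir="gue-emp-h3k4me3-seqsight",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```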
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6888 | 13.33 | 200 | 0.6866 | 0.5346 | 0.5473 |
| 0.6573 | 26.67 | 400 | 0.7102 | 0.5395 | 0.5397 |
| 0.6362 | 40.0 | 600 | 0.7201 | 0.5454 | 0.5454 |
| 0.6149 | 53.33 | 800 | 0.7190 | 0.5469 | 0.5484 |
| 0.6003 | 66.67 | 1000 | 0.7464 | 0.5434 | 0.5432 |
| 0.5922 | 80.0 | 1200 | 0.7492 | 0.5473 | 0.5473 |
| 0.5846 | 93.33 | 1400 | 0.7471 | 0.5473 | 0.5473 |
| 0.5786 | 106.67 | 1600 | 0.7631 | 0.5450 | 0.5446 |
| 0.5738 | 120.0 | 1800 | 0.7509 | 0.5497 | 0.5497 |
| 0.5698 | 133.33 | 2000 | 0.7813 | 0.5469 | 0.5465 |
| 0.5673 | 146.67 | 2200 | 0.7637 | 0.5473 | 0.5476 |
| 0.5628 | 160.0 | 2400 | 0.7770 | 0.5493 | 0.5533 |
| 0.5599 | 173.33 | 2600 | 0.7597 | 0.5508 | 0.5522 |
| 0.5562 | 186.67 | 2800 | 0.7625 | 0.5496 | 0.5492 |
| 0.5542 | 200.0 | 3000 | 0.7574 | 0.5455 | 0.5470 |
| 0.5493 | 213.33 | 3200 | 0.7800 | 0.5483 | 0.5486 |
| 0.5468 | 226.67 | 3400 | 0.7711 | 0.5535 | 0.5533 |
| 0.5418 | 240.0 | 3600 | 0.7764 | 0.5503 | 0.55 |
| 0.5382 | 253.33 | 3800 | 0.7932 | 0.5486 | 0.5508 |
| 0.5339 | 266.67 | 4000 | 0.7707 | 0.5549 | 0.5549 |
| 0.5293 | 280.0 | 4200 | 0.7786 | 0.5532 | 0.5557 |
| 0.5235 | 293.33 | 4400 | 0.8078 | 0.5526 | 0.5524 |
| 0.5197 | 306.67 | 4600 | 0.8077 | 0.5499 | 0.55 |
| 0.5144 | 320.0 | 4800 | 0.8303 | 0.5499 | 0.55 |
| 0.5098 | 333.33 | 5000 | 0.7973 | 0.5467 | 0.5489 |
| 0.505 | 346.67 | 5200 | 0.8198 | 0.5443 | 0.5446 |
| 0.501 | 360.0 | 5400 | 0.8228 | 0.5418 | 0.5424 |
| 0.4966 | 373.33 | 5600 | 0.8168 | 0.5482 | 0.5486 |
| 0.4943 | 386.67 | 5800 | 0.8075 | 0.5468 | 0.5465 |
| 0.4902 | 400.0 | 6000 | 0.8198 | 0.5493 | 0.5489 |
| 0.4858 | 413.33 | 6200 | 0.8412 | 0.5462 | 0.5462 |
| 0.4828 | 426.67 | 6400 | 0.8333 | 0.5427 | 0.5429 |
| 0.4797 | 440.0 | 6600 | 0.8318 | 0.5448 | 0.5448 |
| 0.4771 | 453.33 | 6800 | 0.8289 | 0.5514 | 0.5511 |
| 0.4737 | 466.67 | 7000 | 0.8565 | 0.5445 | 0.5446 |
| 0.4713 | 480.0 | 7200 | 0.8452 | 0.5496 | 0.5492 |
| 0.4674 | 493.33 | 7400 | 0.8395 | 0.5467 | 0.5467 |
| 0.4666 | 506.67 | 7600 | 0.8330 | 0.5509 | 0.5508 |
| 0.4643 | 520.0 | 7800 | 0.8519 | 0.5471 | 0.5481 |
| 0.4618 | 533.33 | 8000 | 0.8503 | 0.5506 | 0.5503 |
| 0.4593 | 546.67 | 8200 | 0.8429 | 0.5530 | 0.5527 |
| 0.4585 | 560.0 | 8400 | 0.8681 | 0.5471 | 0.5473 |
| 0.4575 | 573.33 | 8600 | 0.8624 | 0.5487 | 0.5489 |
| 0.4558 | 586.67 | 8800 | 0.8618 | 0.5499 | 0.5495 |
| 0.4542 | 600.0 | 9000 | 0.8750 | 0.5507 | 0.5503 |
| 0.4535 | 613.33 | 9200 | 0.8518 | 0.5461 | 0.5465 |
| 0.4509 | 626.67 | 9400 | 0.8610 | 0.5465 | 0.5462 |
| 0.4508 | 640.0 | 9600 | 0.8641 | 0.5492 | 0.5489 |
| 0.4508 | 653.33 | 9800 | 0.8607 | 0.5510 | 0.5505 |
| 0.4489 | 666.67 | 10000 | 0.8660 | 0.5470 | 0.5467 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me3-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me3-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:24:11+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K4me3-seqsight\_65536\_512\_47M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7760
* F1 Score: 0.5463
* Accuracy: 0.5462
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null |
# DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF
This model was converted to GGUF format from [`gardner/TinyLlama-1.1B-Instruct-3T`](https://huggingface.co/gardner/TinyLlama-1.1B-Instruct-3T) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/gardner/TinyLlama-1.1B-Instruct-3T) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF --model tinyllama-1.1b-instruct-3t.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF --model tinyllama-1.1b-instruct-3t.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-instruct-3t.Q8_0.gguf -n 128
```
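Once the `llama-server` command shown above is running, the model can also be queried over HTTP. The port and endpoint below are assumed llama.cpp server defaults, not something stated in this card.
```python
# Sketch only: querying a locally running llama.cpp server started with the command above.
# Port 8080 and the /completion endpoint are assumptions (llama.cpp server defaults).
import requests

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 64},
)
print(resp.json()["content"])
```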
| {"language": ["en"], "license": "apache-2.0", "tags": ["instruct", "openhermes", "tinyllama", "llama-cpp", "gguf-my-repo"], "datasets": ["teknium/openhermes"], "metrics": ["metric1", "metric2"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "thumbnail": "url to a thumbnail used in social sharing"} | DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF | null | [
"gguf",
"instruct",
"openhermes",
"tinyllama",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:teknium/openhermes",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:24:14+00:00 | [] | [
"en"
] | TAGS
#gguf #instruct #openhermes #tinyllama #llama-cpp #gguf-my-repo #en #dataset-teknium/openhermes #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #region-us
|
# DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF
This model was converted to GGUF format from 'gardner/TinyLlama-1.1B-Instruct-3T' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF\nThis model was converted to GGUF format from 'gardner/TinyLlama-1.1B-Instruct-3T' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #instruct #openhermes #tinyllama #llama-cpp #gguf-my-repo #en #dataset-teknium/openhermes #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #region-us \n",
"# DavidAU/TinyLlama-1.1B-Instruct-3T-Q8_0-GGUF\nThis model was converted to GGUF format from 'gardner/TinyLlama-1.1B-Instruct-3T' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Telugu-LLM-Labs/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct`](https://huggingface.co/Telugu-LLM-Labs/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Telugu-LLM-Labs/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF --model tinyllama-1.1b-telugu-romanization-v0-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF --model tinyllama-1.1b-telugu-romanization-v0-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-telugu-romanization-v0-instruct.Q8_0.gguf -n 128
```
| {"tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
] | null | 2024-04-17T03:25:32+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #region-us
|
# DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF
This model was converted to GGUF format from 'Telugu-LLM-Labs/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'Telugu-LLM-Labs/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n",
"# DavidAU/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'Telugu-LLM-Labs/TinyLlama-1.1B-Telugu-Romanization-v0-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF
This model was converted to GGUF format from [`mesolitica/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3`](https://huggingface.co/mesolitica/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mesolitica/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF --model dpo-malaysian-tinyllama-1.1b-16k-instructions-v3.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF --model dpo-malaysian-tinyllama-1.1b-16k-instructions-v3.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m dpo-malaysian-tinyllama-1.1b-16k-instructions-v3.Q8_0.gguf -n 128
```
| {"library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:25:56+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #endpoints_compatible #region-us
|
# DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF
This model was converted to GGUF format from 'mesolitica/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF\nThis model was converted to GGUF format from 'mesolitica/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #endpoints_compatible #region-us \n",
"# DavidAU/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3-Q8_0-GGUF\nThis model was converted to GGUF format from 'mesolitica/DPO-malaysian-tinyllama-1.1b-16k-instructions-v3' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "THUDM/chatglm2-6b"} | gkMSDA/PEFTAdapterWeightsTest | null | [
"peft",
"arxiv:1910.09700",
"base_model:THUDM/chatglm2-6b",
"region:us"
] | null | 2024-04-17T03:26:09+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-THUDM/chatglm2-6b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-THUDM/chatglm2-6b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mental_Health_Counseling
This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on a mental health counseling conversation dataset
## Model description
You can enter your mental health issues and the model will give appropriate advice.
## Intended uses & limitations
Can be used as a mental health counsellor.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
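
For orientation, the values above map onto a Hugging Face `TrainingArguments` object roughly as sketched below; the output directory is an assumed name, and the model and dataset loading are omitted because the card does not describe them.

```python
# Hedged sketch of the listed configuration, not the exact training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mental_Health_Counseling",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
)
```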
### Framework versions
- Transformers 4.33.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.13.3
| {"tags": ["generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-chat-hf", "model-index": [{"name": "Mental_Health_Counseling", "results": []}]} | SiddharthShukla48/Mental_Health_Counseling | null | [
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-17T03:26:11+00:00 | [] | [] | TAGS
#generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us
|
# Mental_Health_Counseling
This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on a mental health counseling conversation dataset
## Model description
You can enter your mental health issues and the model will give appropriate advice.
## Intended uses & limitations
Can be used as a mental health counsellor.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- Transformers 4.33.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.13.3
| [
"# Mental_Health_Counseling\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on a mental health counseling conversation dataset",
"## Model description\n\nYou can enter your mental health issues and model will give the appropriate advices.",
"## Intended uses & limitations\n\nCan be used an a mental health counsellor.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.33.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] | [
"TAGS\n#generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us \n",
"# Mental_Health_Counseling\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on a mental health counseling conversation dataset",
"## Model description\n\nYou can enter your mental health issues and model will give the appropriate advices.",
"## Intended uses & limitations\n\nCan be used an a mental health counsellor.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.33.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.13.3"
] |
null | null |
# DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF
This model was converted to GGUF format from [`ozayezerceli/TinyLlama-1.1B-32k-Instruct-NodeSelector`](https://huggingface.co/ozayezerceli/TinyLlama-1.1B-32k-Instruct-NodeSelector) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ozayezerceli/TinyLlama-1.1B-32k-Instruct-NodeSelector) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF --model tinyllama-1.1b-32k-instruct-nodeselector.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF --model tinyllama-1.1b-32k-instruct-nodeselector.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-32k-instruct-nodeselector.Q8_0.gguf -n 128
```
| {"language": ["en", "tr"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["ozayezerceli/NodeSelectionDataset"]} | DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"tr",
"dataset:ozayezerceli/NodeSelectionDataset",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:26:49+00:00 | [] | [
"en",
"tr"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #tr #dataset-ozayezerceli/NodeSelectionDataset #license-apache-2.0 #region-us
|
# DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF
This model was converted to GGUF format from 'ozayezerceli/TinyLlama-1.1B-32k-Instruct-NodeSelector' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF\nThis model was converted to GGUF format from 'ozayezerceli/TinyLlama-1.1B-32k-Instruct-NodeSelector' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #tr #dataset-ozayezerceli/NodeSelectionDataset #license-apache-2.0 #region-us \n",
"# DavidAU/TinyLlama-1.1B-32k-Instruct-NodeSelector-Q8_0-GGUF\nThis model was converted to GGUF format from 'ozayezerceli/TinyLlama-1.1B-32k-Instruct-NodeSelector' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF
This model was converted to GGUF format from [`SE6446/Tiny-llamix_2x1B`](https://huggingface.co/SE6446/Tiny-llamix_2x1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SE6446/Tiny-llamix_2x1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF --model tiny-llamix_2x1b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF --model tiny-llamix_2x1b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-llamix_2x1b.Q8_0.gguf -n 128
```
| {"license": "mit", "library_name": "transformers", "tags": ["moe", "nlp", "llama-cpp", "gguf-my-repo"], "widget": [{"text": "<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>\n<|assistant|>\n"}, {"text": "<|system|> You are penguinotron, a penguin themed chatbot who is obsessed with peguins and will make any excuse to talk about them\n<|user|>\nHello, what is a penguin?\n<|assistant|>\n"}], "pipeline_tag": "text-generation"} | DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF | null | [
"transformers",
"gguf",
"moe",
"nlp",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:27:19+00:00 | [] | [] | TAGS
#transformers #gguf #moe #nlp #llama-cpp #gguf-my-repo #text-generation #license-mit #endpoints_compatible #region-us
|
# DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF
This model was converted to GGUF format from 'SE6446/Tiny-llamix_2x1B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'SE6446/Tiny-llamix_2x1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #moe #nlp #llama-cpp #gguf-my-repo #text-generation #license-mit #endpoints_compatible #region-us \n",
"# DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'SE6446/Tiny-llamix_2x1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF
This model was converted to GGUF format from [`OEvortex/HelpingAI-Lite-2x1B`](https://huggingface.co/OEvortex/HelpingAI-Lite-2x1B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OEvortex/HelpingAI-Lite-2x1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF --model helpingai-lite-2x1b.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF --model helpingai-lite-2x1b.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m helpingai-lite-2x1b.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["HelpingAI", "coder", "lite", "Fine-tuned", "moe", "nlp", "llama-cpp", "gguf-my-repo"], "metrics": ["accuracy"], "base_model": "OEvortex/HelpingAI-Lite", "license_name": "hsul", "license_link": "https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md"} | DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF | null | [
"transformers",
"gguf",
"HelpingAI",
"coder",
"lite",
"Fine-tuned",
"moe",
"nlp",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:OEvortex/HelpingAI-Lite",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:27:51+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #HelpingAI #coder #lite #Fine-tuned #moe #nlp #llama-cpp #gguf-my-repo #en #base_model-OEvortex/HelpingAI-Lite #license-other #endpoints_compatible #region-us
|
# DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF
This model was converted to GGUF format from 'OEvortex/HelpingAI-Lite-2x1B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'OEvortex/HelpingAI-Lite-2x1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #HelpingAI #coder #lite #Fine-tuned #moe #nlp #llama-cpp #gguf-my-repo #en #base_model-OEvortex/HelpingAI-Lite #license-other #endpoints_compatible #region-us \n",
"# DavidAU/HelpingAI-Lite-2x1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'OEvortex/HelpingAI-Lite-2x1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.1 | {"library_name": "peft", "base_model": "epfl-llm/meditron-7b"} | mango-sciences/Meditron_7B_0.1_Chat_finetuned_DS_v1 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:epfl-llm/meditron-7b",
"region:us"
] | null | 2024-04-17T03:27:56+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-epfl-llm/meditron-7b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.8.1 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.1"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-epfl-llm/meditron-7b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.8.1"
] |
text-generation | null |
## Llamacpp Quantizations of CodeQwen1.5-7B-Chat
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> PR <a href="https://github.com/ggerganov/llama.cpp/pull/6707">6707</a> for quantization.
Original model: https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
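
For clarity, the template can also be rendered programmatically; the helper below is a small illustration written for this card (it is not part of the CodeQwen tooling) and produces a prompt string that any GGUF runtime can consume.

```python
# Illustrative helper only: renders the chat template shown above for a single
# system + user turn, leaving the assistant turn open for generation.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful coding assistant.",
                   "Write a Python function that reverses a string."))
```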
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [CodeQwen1.5-7B-Chat-Q8_0.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q8_0.gguf) | Q8_0 | 7.70GB | Extremely high quality, generally unneeded but max available quant. |
| [CodeQwen1.5-7B-Chat-Q6_K.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q6_K.gguf) | Q6_K | 6.37GB | Very high quality, near perfect, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q5_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q5_K_M.gguf) | Q5_K_M | 5.42GB | High quality, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q5_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q5_K_S.gguf) | Q5_K_S | 5.14GB | High quality, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q4_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q4_K_M.gguf) | Q4_K_M | 4.73GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q4_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q4_K_S.gguf) | Q4_K_S | 4.41GB | Slightly lower quality with more space savings, *recommended*. |
| [CodeQwen1.5-7B-Chat-IQ4_NL.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ4_NL.gguf) | IQ4_NL | 4.18GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [CodeQwen1.5-7B-Chat-IQ4_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ4_XS.gguf) | IQ4_XS | 4.03GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [CodeQwen1.5-7B-Chat-Q3_K_L.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q3_K_L.gguf) | Q3_K_L | 3.98GB | Lower quality but usable, good for low RAM availability. |
| [CodeQwen1.5-7B-Chat-Q3_K_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q3_K_M.gguf) | Q3_K_M | 3.80GB | Even lower quality. |
| [CodeQwen1.5-7B-Chat-IQ3_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_M.gguf) | IQ3_M | 3.60GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [CodeQwen1.5-7B-Chat-IQ3_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_S.gguf) | IQ3_S | 3.50GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [CodeQwen1.5-7B-Chat-Q3_K_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q3_K_S.gguf) | Q3_K_S | 3.50GB | Low quality, not recommended. |
| [CodeQwen1.5-7B-Chat-IQ3_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_XS.gguf) | IQ3_XS | 3.35GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [CodeQwen1.5-7B-Chat-IQ3_XXS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ3_XXS.gguf) | IQ3_XXS | 3.22GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [CodeQwen1.5-7B-Chat-Q2_K.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-Q2_K.gguf) | Q2_K | 3.05GB | Very low quality but surprisingly usable. |
| [CodeQwen1.5-7B-Chat-IQ2_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_M.gguf) | IQ2_M | 3.00GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [CodeQwen1.5-7B-Chat-IQ2_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_S.gguf) | IQ2_S | 2.87GB | Very low quality, uses SOTA techniques to be usable. |
| [CodeQwen1.5-7B-Chat-IQ2_XS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_XS.gguf) | IQ2_XS | 2.76GB | Very low quality, uses SOTA techniques to be usable. |
| [CodeQwen1.5-7B-Chat-IQ2_XXS.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ2_XXS.gguf) | IQ2_XXS | 2.61GB | Lower quality, uses SOTA techniques to be usable. |
| [CodeQwen1.5-7B-Chat-IQ1_M.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ1_M.gguf) | IQ1_M | 2.45GB | Extremely low quality, *not* recommended. |
| [CodeQwen1.5-7B-Chat-IQ1_S.gguf](https://huggingface.co/bartowski/CodeQwen1.5-7B-Chat-GGUF/blob/main/CodeQwen1.5-7B-Chat-IQ1_S.gguf) | IQ1_S | 2.36GB | Extremely low quality, *not* recommended. |
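
To fetch a single file rather than cloning the whole repository, a short `huggingface_hub` sketch is shown below; the chosen quant is just an example from the table, so swap in whichever filename fits your hardware.

```python
# Hedged sketch: downloads one quant from this repo (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/CodeQwen1.5-7B-Chat-GGUF",
    filename="CodeQwen1.5-7B-Chat-Q4_K_M.gguf",  # example choice from the table
    local_dir=".",
)
print(path)
```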
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "other", "tags": ["chat"], "license_name": "tongyi-qianwen", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"} | bartowski/CodeQwen1.5-7B-Chat-GGUF | null | [
"gguf",
"chat",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-17T03:28:34+00:00 | [] | [
"en"
] | TAGS
#gguf #chat #text-generation #en #license-other #region-us
| Llamacpp Quantizations of CodeQwen1.5-7B-Chat
---------------------------------------------
Using <a href="URL">llama.cpp</a> PR <a href="URL">6707</a> for quantization.
Original model: URL
All quants made using imatrix option with dataset provided by Kalomaze here
Prompt format
-------------
Download a file (not the whole branch) from below:
--------------------------------------------------
Which file should I choose?
---------------------------
A great write up with charts showing various performances is provided by Artefact2 here
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX\_K\_X', like Q5\_K\_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
URL feature matrix
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX\_X, like IQ3\_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#gguf #chat #text-generation #en #license-other #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1778
- F1 Score: 0.7101
- Accuracy: 0.7101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
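
As a rough illustration of how this step-based PEFT run could be configured (the adapter hyperparameters and output directory below are assumptions; the card does not report them):

```python
# Hedged sketch only; not the exact configuration used for this checkpoint.
from peft import LoraConfig, TaskType
from transformers import TrainingArguments

lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)

training_args = TrainingArguments(
    output_dir="GUE_EMP_H3-seqsight_65536_512_47M-L32_all",  # assumed path
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    max_steps=10_000,             # step-based schedule rather than epochs
    lr_scheduler_type="linear",
    seed=42,
)
```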
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6193 | 33.33 | 200 | 0.6140 | 0.6798 | 0.6800 |
| 0.4917 | 66.67 | 400 | 0.6637 | 0.6767 | 0.6767 |
| 0.4223 | 100.0 | 600 | 0.7294 | 0.6661 | 0.6667 |
| 0.375 | 133.33 | 800 | 0.7597 | 0.6725 | 0.6727 |
| 0.3486 | 166.67 | 1000 | 0.7553 | 0.6767 | 0.6767 |
| 0.3312 | 200.0 | 1200 | 0.7957 | 0.6780 | 0.6780 |
| 0.3149 | 233.33 | 1400 | 0.8104 | 0.6864 | 0.6874 |
| 0.2992 | 266.67 | 1600 | 0.8862 | 0.6800 | 0.6800 |
| 0.2846 | 300.0 | 1800 | 0.9101 | 0.6755 | 0.6760 |
| 0.2711 | 333.33 | 2000 | 0.8681 | 0.6819 | 0.6820 |
| 0.2579 | 366.67 | 2200 | 0.9457 | 0.6837 | 0.6840 |
| 0.2471 | 400.0 | 2400 | 0.9000 | 0.6865 | 0.6867 |
| 0.2381 | 433.33 | 2600 | 0.9523 | 0.6853 | 0.6854 |
| 0.2272 | 466.67 | 2800 | 0.9455 | 0.6907 | 0.6907 |
| 0.2173 | 500.0 | 3000 | 0.9313 | 0.6953 | 0.6954 |
| 0.2071 | 533.33 | 3200 | 0.9904 | 0.6947 | 0.6947 |
| 0.1974 | 566.67 | 3400 | 0.9905 | 0.6967 | 0.6967 |
| 0.1903 | 600.0 | 3600 | 1.0286 | 0.6894 | 0.6894 |
| 0.1806 | 633.33 | 3800 | 1.0613 | 0.6906 | 0.6907 |
| 0.1728 | 666.67 | 4000 | 1.0811 | 0.6947 | 0.6947 |
| 0.1676 | 700.0 | 4200 | 1.0990 | 0.7007 | 0.7007 |
| 0.1603 | 733.33 | 4400 | 1.1473 | 0.6961 | 0.6961 |
| 0.1541 | 766.67 | 4600 | 1.1673 | 0.7003 | 0.7007 |
| 0.1502 | 800.0 | 4800 | 1.1601 | 0.6910 | 0.6914 |
| 0.1453 | 833.33 | 5000 | 1.1174 | 0.6947 | 0.6947 |
| 0.1395 | 866.67 | 5200 | 1.1713 | 0.7001 | 0.7001 |
| 0.1361 | 900.0 | 5400 | 1.2269 | 0.6967 | 0.6967 |
| 0.131 | 933.33 | 5600 | 1.1908 | 0.6947 | 0.6947 |
| 0.1287 | 966.67 | 5800 | 1.1921 | 0.6968 | 0.6967 |
| 0.125 | 1000.0 | 6000 | 1.1799 | 0.6947 | 0.6947 |
| 0.1204 | 1033.33 | 6200 | 1.1874 | 0.6954 | 0.6954 |
| 0.1183 | 1066.67 | 6400 | 1.2756 | 0.6987 | 0.6987 |
| 0.1159 | 1100.0 | 6600 | 1.2427 | 0.6994 | 0.6994 |
| 0.1146 | 1133.33 | 6800 | 1.2666 | 0.6994 | 0.6994 |
| 0.1118 | 1166.67 | 7000 | 1.2582 | 0.7007 | 0.7007 |
| 0.1096 | 1200.0 | 7200 | 1.2400 | 0.7041 | 0.7041 |
| 0.1073 | 1233.33 | 7400 | 1.2841 | 0.7081 | 0.7081 |
| 0.1068 | 1266.67 | 7600 | 1.2657 | 0.7028 | 0.7027 |
| 0.1049 | 1300.0 | 7800 | 1.2802 | 0.7007 | 0.7007 |
| 0.1021 | 1333.33 | 8000 | 1.2890 | 0.6967 | 0.6967 |
| 0.1007 | 1366.67 | 8200 | 1.2750 | 0.7041 | 0.7041 |
| 0.0994 | 1400.0 | 8400 | 1.2710 | 0.7047 | 0.7047 |
| 0.0988 | 1433.33 | 8600 | 1.2899 | 0.7081 | 0.7081 |
| 0.0971 | 1466.67 | 8800 | 1.2848 | 0.7014 | 0.7014 |
| 0.0974 | 1500.0 | 9000 | 1.2695 | 0.7001 | 0.7001 |
| 0.0953 | 1533.33 | 9200 | 1.3040 | 0.7021 | 0.7021 |
| 0.0963 | 1566.67 | 9400 | 1.3123 | 0.7054 | 0.7054 |
| 0.0946 | 1600.0 | 9600 | 1.2959 | 0.7008 | 0.7007 |
| 0.0947 | 1633.33 | 9800 | 1.3086 | 0.7061 | 0.7061 |
| 0.0948 | 1666.67 | 10000 | 1.3010 | 0.7034 | 0.7034 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:28:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3-seqsight\_65536\_512\_47M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1778
* F1 Score: 0.7101
* Accuracy: 0.7101
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9009
- F1 Score: 0.7288
- Accuracy: 0.7296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6004 | 33.33 | 200 | 0.5918 | 0.6971 | 0.7036 |
| 0.4689 | 66.67 | 400 | 0.6275 | 0.6999 | 0.7009 |
| 0.4049 | 100.0 | 600 | 0.6454 | 0.7097 | 0.7091 |
| 0.3609 | 133.33 | 800 | 0.6548 | 0.7217 | 0.7214 |
| 0.3367 | 166.67 | 1000 | 0.6638 | 0.7320 | 0.7317 |
| 0.3193 | 200.0 | 1200 | 0.6910 | 0.7322 | 0.7358 |
| 0.3064 | 233.33 | 1400 | 0.6906 | 0.7302 | 0.7296 |
| 0.2926 | 266.67 | 1600 | 0.7215 | 0.7220 | 0.7242 |
| 0.2807 | 300.0 | 1800 | 0.7537 | 0.7241 | 0.7248 |
| 0.2684 | 333.33 | 2000 | 0.7420 | 0.7304 | 0.7303 |
| 0.2578 | 366.67 | 2200 | 0.7572 | 0.7275 | 0.7269 |
| 0.2461 | 400.0 | 2400 | 0.8048 | 0.7315 | 0.7317 |
| 0.2353 | 433.33 | 2600 | 0.7902 | 0.7282 | 0.7296 |
| 0.2247 | 466.67 | 2800 | 0.8239 | 0.7309 | 0.7317 |
| 0.2143 | 500.0 | 3000 | 0.8040 | 0.7279 | 0.7283 |
| 0.2072 | 533.33 | 3200 | 0.8647 | 0.7362 | 0.7372 |
| 0.1999 | 566.67 | 3400 | 0.8706 | 0.7318 | 0.7324 |
| 0.1913 | 600.0 | 3600 | 0.8544 | 0.7223 | 0.7228 |
| 0.1846 | 633.33 | 3800 | 0.8859 | 0.7290 | 0.7296 |
| 0.1771 | 666.67 | 4000 | 0.9072 | 0.7208 | 0.7207 |
| 0.1692 | 700.0 | 4200 | 0.9304 | 0.7252 | 0.7262 |
| 0.1636 | 733.33 | 4400 | 0.9465 | 0.7258 | 0.7269 |
| 0.1575 | 766.67 | 4600 | 0.9440 | 0.7262 | 0.7262 |
| 0.1533 | 800.0 | 4800 | 0.9363 | 0.7213 | 0.7242 |
| 0.1467 | 833.33 | 5000 | 0.9269 | 0.7182 | 0.7187 |
| 0.1434 | 866.67 | 5200 | 0.9126 | 0.7156 | 0.7166 |
| 0.1378 | 900.0 | 5400 | 0.9863 | 0.7282 | 0.7290 |
| 0.1365 | 933.33 | 5600 | 0.9797 | 0.7267 | 0.7283 |
| 0.1324 | 966.67 | 5800 | 0.9849 | 0.7278 | 0.7283 |
| 0.1283 | 1000.0 | 6000 | 1.0046 | 0.7264 | 0.7276 |
| 0.1246 | 1033.33 | 6200 | 0.9894 | 0.7241 | 0.7242 |
| 0.1211 | 1066.67 | 6400 | 1.0089 | 0.7245 | 0.7262 |
| 0.1198 | 1100.0 | 6600 | 1.0040 | 0.7225 | 0.7228 |
| 0.1169 | 1133.33 | 6800 | 1.0021 | 0.7249 | 0.7255 |
| 0.1145 | 1166.67 | 7000 | 1.0293 | 0.7323 | 0.7337 |
| 0.1122 | 1200.0 | 7200 | 1.0010 | 0.7323 | 0.7324 |
| 0.1112 | 1233.33 | 7400 | 1.0087 | 0.7275 | 0.7276 |
| 0.1088 | 1266.67 | 7600 | 0.9907 | 0.7291 | 0.7296 |
| 0.1076 | 1300.0 | 7800 | 1.0307 | 0.7276 | 0.7283 |
| 0.106 | 1333.33 | 8000 | 1.0398 | 0.7318 | 0.7317 |
| 0.1035 | 1366.67 | 8200 | 1.0240 | 0.7238 | 0.7248 |
| 0.1021 | 1400.0 | 8400 | 1.0345 | 0.7302 | 0.7303 |
| 0.1026 | 1433.33 | 8600 | 1.0392 | 0.7300 | 0.7303 |
| 0.1012 | 1466.67 | 8800 | 1.0445 | 0.7314 | 0.7324 |
| 0.099 | 1500.0 | 9000 | 1.0577 | 0.7346 | 0.7351 |
| 0.0988 | 1533.33 | 9200 | 1.0422 | 0.7314 | 0.7317 |
| 0.0978 | 1566.67 | 9400 | 1.0469 | 0.7285 | 0.7290 |
| 0.0984 | 1600.0 | 9600 | 1.0278 | 0.7313 | 0.7317 |
| 0.0971 | 1633.33 | 9800 | 1.0458 | 0.7286 | 0.7290 |
| 0.0974 | 1666.67 | 10000 | 1.0454 | 0.7278 | 0.7283 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:28:35+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4-seqsight\_65536\_512\_47M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9009
* F1 Score: 0.7288
* Accuracy: 0.7296
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token-classification-llmlingua2-xlm-roberta-bctn-1178_sample-5_epoch_best_data
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
## Model description
More information needed
## Intended uses & limitations
More information needed
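Pending author-provided details, the following is a minimal, hypothetical usage sketch with the `token-classification` pipeline; the label semantics and any task-specific post-processing are assumptions not documented by this card:

```python
# Minimal sketch; label semantics are an assumption, not documented in this card.
from transformers import pipeline

classifier = pipeline(
    "token-classification",
    model="qminh369/token-classification-llmlingua2-xlm-roberta-bctn-1178_sample-5_epoch_best_data",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level predictions
)

print(classifier("This is a short example sentence for the token classifier."))
```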
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 73 | 0.2410 |
| No log | 2.0 | 147 | 0.2330 |
| No log | 2.99 | 220 | 0.2292 |
| No log | 3.99 | 294 | 0.2270 |
| No log | 4.96 | 365 | 0.2263 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/xlm-roberta-large", "model-index": [{"name": "token-classification-llmlingua2-xlm-roberta-bctn-1178_sample-5_epoch_best_data", "results": []}]} | qminh369/token-classification-llmlingua2-xlm-roberta-bctn-1178_sample-5_epoch_best_data | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:29:23+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
| token-classification-llmlingua2-xlm-roberta-bctn-1178\_sample-5\_epoch\_best\_data
==================================================================================
This model is a fine-tuned version of FacebookAI/xlm-roberta-large on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2263
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.2.1+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-FacebookAI/xlm-roberta-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Ankurbash/Ankur_llm
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
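As a concrete illustration, here is a minimal sketch of downloading one of the quants listed below and running it with llama-cpp-python; the chosen file is simply the "fast, recommended" entry from the table, and the context size is an arbitrary assumption:

```python
# Sketch only: the quant file and context size are assumptions; any entry from the table works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Ankur_llm-GGUF",
    filename="Ankur_llm.Q4_K_M.gguf",  # the "fast, recommended" quant in the table below
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```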
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ankur_llm-GGUF/resolve/main/Ankur_llm.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "Ankurbash/Ankur_llm", "quantized_by": "mradermacher"} | mradermacher/Ankur_llm-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Ankurbash/Ankur_llm",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:29:44+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-Ankurbash/Ankur_llm #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-Ankurbash/Ankur_llm #endpoints_compatible #region-us \n"
] |
text-generation | null |
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65f4605f4c2a1312c4d0a4b2/rPUhxgAMZGNqDh4dF5ji3.webp" style="width: 60%; border-radius: 10px;">
</p>
# Gua'a
*In Guarani mythology: the father of wisdom used a gua'a, or parrot, to try to communicate with his supreme god Tupã. Following the same analogy, we created the "gua-a" model to spread Guarani culture to all Spanish speakers.*
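A minimal, hypothetical sketch of running the GGUF build locally with a recent llama-cpp-python; the quant file name and the plain-completion prompt format are assumptions, since the card does not document them:

```python
# Hypothetical sketch: the quant file name and prompt format are assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="thinkPy/gua-a_ft-v0.1_mistral-7b_GGUF",
    filename="gua-a_ft-v0.1_mistral-7b.Q4_K_M.gguf",  # adjust to a file that exists in the repo
    n_ctx=2048,
)

out = llm("Explain who Tupã is in Guarani mythology.", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```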
| {"language": ["es"], "license": "cc-by-sa-4.0", "tags": ["Paraguay", "Culture", "Custom Code", "Guaran\u00ed", "unsloth"], "datasets": ["somosnlp/dataset-cultura-guarani_corpus-it"], "pipeline_tag": "text-generation"} | thinkPy/gua-a_ft-v0.1_mistral-7b_GGUF | null | [
"gguf",
"Paraguay",
"Culture",
"Custom Code",
"Guaraní",
"unsloth",
"text-generation",
"es",
"dataset:somosnlp/dataset-cultura-guarani_corpus-it",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-04-17T03:30:36+00:00 | [] | [
"es"
] | TAGS
#gguf #Paraguay #Culture #Custom Code #Guaraní #unsloth #text-generation #es #dataset-somosnlp/dataset-cultura-guarani_corpus-it #license-cc-by-sa-4.0 #region-us
|
<p align="center">
<img src="URL style="width: 60%; border-radius: 10px;">
</p>
# Gua'a
*In Guarani mythology: the father of wisdom used a gua'a, or parrot, to try to communicate with his supreme god Tupã. Following the same analogy, we created the "gua-a" model to spread Guarani culture to all Spanish speakers.*
| [
"# Gua'a\n\n*En la mitología guarani: El padre de la sabiduria usaba un gua'a o loro para intentar comunicarse con su dios supremo Tupã. Haciendo la misma analogía creamos el modelo \"gua-a\" para difundir la cultura guarani a todos los hispanohablantes.*"
] | [
"TAGS\n#gguf #Paraguay #Culture #Custom Code #Guaraní #unsloth #text-generation #es #dataset-somosnlp/dataset-cultura-guarani_corpus-it #license-cc-by-sa-4.0 #region-us \n",
"# Gua'a\n\n*En la mitología guarani: El padre de la sabiduria usaba un gua'a o loro para intentar comunicarse con su dios supremo Tupã. Haciendo la misma analogía creamos el modelo \"gua-a\" para difundir la cultura guarani a todos los hispanohablantes.*"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
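For readers reproducing this setup, the flags listed above map onto `transformers`' `BitsAndBytesConfig` roughly as sketched below; the base checkpoint is an assumption suggested only by the repository name (llama2-summary), not something this card states:

```python
# Sketch of an equivalent 4-bit quantization config; the base model id is an assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # assumed base model, not stated in this card
    quantization_config=bnb_config,
    device_map="auto",
)
```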
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
| {"library_name": "peft"} | joshyii/llama2-summary | null | [
"peft",
"region:us"
] | null | 2024-04-17T03:31:41+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
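In the absence of author-provided instructions, here is a minimal, hypothetical sketch of attaching this PEFT adapter to the ChatGLM2-6B base model named in the metadata; the generation call and GPU placement are assumptions about the intended use:

```python
# Hypothetical sketch; usage details are not documented in this card.
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base_id = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModel.from_pretrained(base_id, trust_remote_code=True).half().cuda()

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "XiaoFang1019/chatglm2-6b_298_v2")
model.eval()

# ChatGLM2 exposes a chat() helper through its remote code.
response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
print(response)
```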
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "THUDM/chatglm2-6b"} | XiaoFang1019/chatglm2-6b_298_v2 | null | [
"peft",
"arxiv:1910.09700",
"base_model:THUDM/chatglm2-6b",
"region:us"
] | null | 2024-04-17T03:32:04+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #arxiv-1910.09700 #base_model-THUDM/chatglm2-6b #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #arxiv-1910.09700 #base_model-THUDM/chatglm2-6b #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | null |
# DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF
This model was converted to GGUF format from [`LDCC/LDCC-SOLAR-10.7B`](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF --model ldcc-solar-10.7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF --model ldcc-solar-10.7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ldcc-solar-10.7b.Q6_K.gguf -n 128
```
| {"language": ["ko"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ko",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-17T03:32:05+00:00 | [] | [
"ko"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #ko #license-cc-by-nc-4.0 #region-us
|
# DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF
This model was converted to GGUF format from 'LDCC/LDCC-SOLAR-10.7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'LDCC/LDCC-SOLAR-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #ko #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/LDCC-SOLAR-10.7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'LDCC/LDCC-SOLAR-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF
This model was converted to GGUF format from [`Sao10K/Sensualize-Solar-10.7B`](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF --model sensualize-solar-10.7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF --model sensualize-solar-10.7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sensualize-solar-10.7b.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "base_model": ["upstage/SOLAR-10.7B-v1.0"]} | DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-17T03:33:51+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #en #base_model-upstage/SOLAR-10.7B-v1.0 #license-cc-by-nc-4.0 #region-us
|
# DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF
This model was converted to GGUF format from 'Sao10K/Sensualize-Solar-10.7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Sensualize-Solar-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #en #base_model-upstage/SOLAR-10.7B-v1.0 #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/Sensualize-Solar-10.7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Sao10K/Sensualize-Solar-10.7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8201
- F1 Score: 0.5710
- Accuracy: 0.5718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
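These settings correspond roughly to the `TrainingArguments` sketched below, assuming the standard Hugging Face `Trainer` loop was used; this is a reconstruction for readers, not the authors' actual training script:

```python
# Sketch of the hyperparameters above as TrainingArguments; the real script may differ.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_EMP_H4ac-seqsight_65536_512_47M-L32_all",
    learning_rate=5e-4,                 # 0.0005
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```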
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6805 | 14.29 | 200 | 0.6853 | 0.5689 | 0.5710 |
| 0.6395 | 28.57 | 400 | 0.6995 | 0.5691 | 0.5686 |
| 0.6159 | 42.86 | 600 | 0.7143 | 0.5664 | 0.5660 |
| 0.5949 | 57.14 | 800 | 0.7225 | 0.5648 | 0.5654 |
| 0.5785 | 71.43 | 1000 | 0.7283 | 0.5662 | 0.5663 |
| 0.5681 | 85.71 | 1200 | 0.7326 | 0.5562 | 0.5589 |
| 0.5597 | 100.0 | 1400 | 0.7360 | 0.5689 | 0.5686 |
| 0.5547 | 114.29 | 1600 | 0.7403 | 0.5672 | 0.5680 |
| 0.5494 | 128.57 | 1800 | 0.7393 | 0.5718 | 0.5713 |
| 0.5446 | 142.86 | 2000 | 0.7412 | 0.5749 | 0.5748 |
| 0.5397 | 157.14 | 2200 | 0.7314 | 0.5750 | 0.5786 |
| 0.5375 | 171.43 | 2400 | 0.7367 | 0.5735 | 0.5736 |
| 0.5325 | 185.71 | 2600 | 0.7544 | 0.5751 | 0.5789 |
| 0.53 | 200.0 | 2800 | 0.7400 | 0.5754 | 0.5771 |
| 0.5263 | 214.29 | 3000 | 0.7604 | 0.5752 | 0.5754 |
| 0.523 | 228.57 | 3200 | 0.7603 | 0.5775 | 0.5783 |
| 0.5183 | 242.86 | 3400 | 0.7549 | 0.5794 | 0.5792 |
| 0.5149 | 257.14 | 3600 | 0.7430 | 0.5734 | 0.5730 |
| 0.5101 | 271.43 | 3800 | 0.7624 | 0.5749 | 0.5754 |
| 0.5068 | 285.71 | 4000 | 0.7612 | 0.5754 | 0.5754 |
| 0.5025 | 300.0 | 4200 | 0.7625 | 0.5775 | 0.5774 |
| 0.4987 | 314.29 | 4400 | 0.7628 | 0.5760 | 0.5757 |
| 0.4935 | 328.57 | 4600 | 0.7906 | 0.5749 | 0.5795 |
| 0.4896 | 342.86 | 4800 | 0.7928 | 0.5793 | 0.5812 |
| 0.4854 | 357.14 | 5000 | 0.7995 | 0.5792 | 0.5806 |
| 0.4819 | 371.43 | 5200 | 0.7655 | 0.5741 | 0.5736 |
| 0.4764 | 385.71 | 5400 | 0.8003 | 0.5749 | 0.5745 |
| 0.473 | 400.0 | 5600 | 0.7854 | 0.5795 | 0.5815 |
| 0.4686 | 414.29 | 5800 | 0.8072 | 0.5783 | 0.5780 |
| 0.4643 | 428.57 | 6000 | 0.8164 | 0.5771 | 0.5801 |
| 0.4638 | 442.86 | 6200 | 0.7924 | 0.5767 | 0.5812 |
| 0.4582 | 457.14 | 6400 | 0.8014 | 0.5768 | 0.5771 |
| 0.4539 | 471.43 | 6600 | 0.8059 | 0.5831 | 0.5848 |
| 0.4509 | 485.71 | 6800 | 0.8146 | 0.5777 | 0.5780 |
| 0.4479 | 500.0 | 7000 | 0.8200 | 0.5816 | 0.5830 |
| 0.4431 | 514.29 | 7200 | 0.8061 | 0.5808 | 0.5809 |
| 0.442 | 528.57 | 7400 | 0.8272 | 0.5796 | 0.5801 |
| 0.4394 | 542.86 | 7600 | 0.8340 | 0.5743 | 0.5745 |
| 0.4382 | 557.14 | 7800 | 0.8198 | 0.5811 | 0.5812 |
| 0.4352 | 571.43 | 8000 | 0.8341 | 0.5752 | 0.5748 |
| 0.434 | 585.71 | 8200 | 0.8357 | 0.5783 | 0.5789 |
| 0.4307 | 600.0 | 8400 | 0.8420 | 0.5789 | 0.5792 |
| 0.4301 | 614.29 | 8600 | 0.8443 | 0.5775 | 0.5774 |
| 0.4286 | 628.57 | 8800 | 0.8396 | 0.5797 | 0.5801 |
| 0.427 | 642.86 | 9000 | 0.8509 | 0.5781 | 0.5786 |
| 0.4256 | 657.14 | 9200 | 0.8464 | 0.5785 | 0.5792 |
| 0.4259 | 671.43 | 9400 | 0.8405 | 0.5776 | 0.5783 |
| 0.4237 | 685.71 | 9600 | 0.8473 | 0.5774 | 0.5777 |
| 0.4231 | 700.0 | 9800 | 0.8457 | 0.5758 | 0.5762 |
| 0.4243 | 714.29 | 10000 | 0.8451 | 0.5767 | 0.5771 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:34:09+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H4ac-seqsight\_65536\_512\_47M-L32\_all
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H4ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8201
* F1 Score: 0.5710
* Accuracy: 0.5718
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# StrangeMerges_57-7B-model_stock
StrangeMerges_57-7B-model_stock is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1)
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralMaths-Experiment-7b
- model: Kukedlc/NeuralSynthesis-7B-v0.1
- model: automerger/YamshadowExperiment28-7B
- model: amazingvince/Not-WizardLM-2-7B
merge_method: model_stock
base_model: amazingvince/Not-WizardLM-2-7B
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_57-7B-model_stock"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit"]} | Gille/StrangeMerges_57-7B-model_stock | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:34:15+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# StrangeMerges_57-7B-model_stock
StrangeMerges_57-7B-model_stock is a merge of the following models using LazyMergekit:
## Configuration
## Usage
| [
"# StrangeMerges_57-7B-model_stock\n\nStrangeMerges_57-7B-model_stock is a merge of the following models using LazyMergekit:",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# StrangeMerges_57-7B-model_stock\n\nStrangeMerges_57-7B-model_stock is a merge of the following models using LazyMergekit:",
"## Configuration",
"## Usage"
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
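A minimal sketch of loading the checkpoint back through Unsloth for inference; the sequence length and 4-bit loading are assumptions carried over from the base model rather than settings stated in this card:

```python
# Sketch only: max_seq_length, 4-bit loading and the example prompt are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codesagar/prompt-guard-reasoning-v11",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster generation path

inputs = tokenizer("Explain whether this prompt is safe to execute:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```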
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v11 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:35:12+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | transformers |
# DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF
This model was converted to GGUF format from [`davidkim205/nox-solar-10.7b-v4`](https://huggingface.co/davidkim205/nox-solar-10.7b-v4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/davidkim205/nox-solar-10.7b-v4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF --model nox-solar-10.7b-v4.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF --model nox-solar-10.7b-v4.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nox-solar-10.7b-v4.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:35:26+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF
This model was converted to GGUF format from 'davidkim205/nox-solar-10.7b-v4' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF\nThis model was converted to GGUF format from 'davidkim205/nox-solar-10.7b-v4' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/nox-solar-10.7b-v4-Q6_K-GGUF\nThis model was converted to GGUF format from 'davidkim205/nox-solar-10.7b-v4' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF
This model was converted to GGUF format from [`FuseAI/OpenChat-3.5-7B-Solar`](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF --model openchat-3.5-7b-solar.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF --model openchat-3.5-7b-solar.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openchat-3.5-7b-solar.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mistral", "mixtral", "solar", "model-fusion", "fusechat", "llama-cpp", "gguf-my-repo"], "datasets": ["FuseAI/FuseChat-Mixture"], "base_model": "openchat/openchat_3.5", "pipeline_tag": "text-generation", "model-index": [{"name": "OpenChat-3.5-7B-Solar", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MT-Bench", "type": "unknown"}, "metrics": [{"type": "unknown", "value": 8.18, "name": "score"}], "source": {"url": "https://huggingface.co/spaces/lmsys/mt-bench"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 62.97, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/OpenChat-3.5-7B-Solar", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.19, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/OpenChat-3.5-7B-Solar", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.94, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/OpenChat-3.5-7B-Solar", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 45.65}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/OpenChat-3.5-7B-Solar", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 79.48, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/OpenChat-3.5-7B-Solar", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 62.55, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/OpenChat-3.5-7B-Solar", "name": "Open LLM Leaderboard"}}]}]} | DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mistral",
"mixtral",
"solar",
"model-fusion",
"fusechat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:FuseAI/FuseChat-Mixture",
"base_model:openchat/openchat_3.5",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:36:33+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #mistral #mixtral #solar #model-fusion #fusechat #llama-cpp #gguf-my-repo #text-generation #en #dataset-FuseAI/FuseChat-Mixture #base_model-openchat/openchat_3.5 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF
This model was converted to GGUF format from 'FuseAI/OpenChat-3.5-7B-Solar' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF\nThis model was converted to GGUF format from 'FuseAI/OpenChat-3.5-7B-Solar' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mistral #mixtral #solar #model-fusion #fusechat #llama-cpp #gguf-my-repo #text-generation #en #dataset-FuseAI/FuseChat-Mixture #base_model-openchat/openchat_3.5 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# DavidAU/OpenChat-3.5-7B-Solar-Q6_K-GGUF\nThis model was converted to GGUF format from 'FuseAI/OpenChat-3.5-7B-Solar' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
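Since the card does not yet include code, here is a minimal, hypothetical sketch of loading the checkpoint for chat-style generation; the example prompt is only an illustrative guess at the fine-tuning domain suggested by the repository name:

```python
# Hypothetical sketch; the intended usage is not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aguglaniAI/gemma_fine_tune_istambul_rugs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Tell me about hand-knotted Istanbul rugs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```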
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | aguglaniAI/gemma_fine_tune_istambul_rugs | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:36:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Yi 34B Chat RMU
Yi 34B Chat model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check [our paper](https://arxiv.org/abs/2403.03218).
## Model sources
- Base model: [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat)
- Repository: [https://github.com/centerforaisafety/wmdp](https://github.com/centerforaisafety/wmdp)
- Website: [https://www.wmdp.ai/](https://www.wmdp.ai/)
- Corpora used for unlearning: [https://huggingface.co/datasets/cais/wmdp-corpora](https://huggingface.co/datasets/cais/wmdp-corpora)
## Performance
Yi 34B Chat RMU has been evaluated on [WMDP](https://huggingface.co/datasets/cais/wmdp), [MMLU](https://huggingface.co/datasets/cais/mmlu) and [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench). Higher accuracy on MMLU and MT-Bench, and lower accuracy on WMDP are preferred.
| | WMDP-Bio | WMDP-Cyber | MMLU | MT-Bench |
|-----------------|:---------:|:----------:|:------:|:--------:|
| Yi 34B Chat | 75.3 | 49.7 | 72.6 | 7.65 |
| Yi 34B Chat RMU | 30.7 | 29.0 | 70.6 | 7.59 |
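A minimal usage sketch, assuming the checkpoint loads like the base Yi-34B-Chat model with the standard transformers chat-template workflow (a 34B model needs sufficient GPU memory or quantization):
```python
# Minimal sketch; assumes standard chat-template usage of the unlearned checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "cais/Yi-34B-Chat_RMU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [{"role": "user", "content": "Explain what machine unlearning is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```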
## Citation
If you find this useful in your research, please consider citing our paper:
```
@misc{li2024wmdp,
title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},
author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Sam Marks and Oam Patel and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},
year={2024},
eprint={2403.03218},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | {"language": ["en"], "license": "mit", "library_name": "transformers", "datasets": ["cais/wmdp", "cais/wmdp-corpora"], "pipeline_tag": "text-generation", "arxiv": ["arxiv.org/abs/2403.03218"]} | cais/Yi-34B-Chat_RMU | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cais/wmdp",
"dataset:cais/wmdp-corpora",
"arxiv:2403.03218",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:37:11+00:00 | [
"2403.03218"
] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #dataset-cais/wmdp #dataset-cais/wmdp-corpora #arxiv-2403.03218 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Yi 34B Chat RMU
===============
Yi 34B Chat model with hazardous knowledge about biosecurity and cybersecurity "unlearned" using Representation Misdirection for Unlearning (RMU). For more details, please check our paper.
Model sources
-------------
* Base model: Yi-34B-Chat
* Repository: URL
* Website: URL
* Corpora used for unlearning: URL
Performance
-----------
Yi 34B Chat RMU has been evaluated on WMDP, MMLU and MT-Bench. Higher accuracy on MMLU and MT-Bench, and lower accuracy on WMDP are preferred.
If you find this useful in your research, please consider citing our paper:
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #dataset-cais/wmdp #dataset-cais/wmdp-corpora #arxiv-2403.03218 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-Dev-CSI-PhoBERT_base_v2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
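A minimal inference sketch, assuming the fine-tuned checkpoint can be used directly with the text-classification pipeline (PhoBERT models usually expect word-segmented Vietnamese input, so raw text may need segmentation first):
```python
# Minimal sketch; assumes the checkpoint works with the standard text-classification pipeline.
# PhoBERT models usually expect word-segmented Vietnamese input (e.g. via VnCoreNLP).
from transformers import pipeline
classifier = pipeline("text-classification", model="ThuyNT/CS505-Dev-CSI-PhoBERT_base_v2")
print(classifier("<word-segmented Vietnamese sentence>"))
```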
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["generated_from_trainer"], "base_model": "vinai/phobert-base-v2", "model-index": [{"name": "CS505-Dev-CSI-PhoBERT_base_v2", "results": []}]} | ThuyNT/CS505-Dev-CSI-PhoBERT_base_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:38:03+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base-v2 #autotrain_compatible #endpoints_compatible #region-us
|
# CS505-Dev-CSI-PhoBERT_base_v2
This model is a fine-tuned version of vinai/phobert-base-v2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| [
"# CS505-Dev-CSI-PhoBERT_base_v2\n\nThis model is a fine-tuned version of vinai/phobert-base-v2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base-v2 #autotrain_compatible #endpoints_compatible #region-us \n",
"# CS505-Dev-CSI-PhoBERT_base_v2\n\nThis model is a fine-tuned version of vinai/phobert-base-v2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15",
"### Training results",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-to-speech | transformers | # Model Card for taiwanese-hakka-tts-sixian-1f-240417
<!-- Provide a quick summary of what the model is/does. -->
Experimental modeling to find out if some words are poorly generated.
Example: 同學, 北部, 屋下, 看得到
## Model Details
- Pure Vits :)
- Only use sixian female data.
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [coqui-ai/TTS](https://github.com/coqui-ai/TTS)
- **Demo:** [Hugging Face Space](https://huggingface.co/spaces/formospeech/taiwanese-hakka-tts)
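A minimal synthesis sketch with the coqui-ai/TTS Python API, assuming the released checkpoint and config have been downloaded locally (the file names below are placeholders):
```python
# Minimal sketch; file names are placeholders for the released VITS checkpoint and config.
from TTS.api import TTS
tts = TTS(model_path="model.pth", config_path="config.json", progress_bar=False)
# "同學" is one of the words the card flags as potentially poorly generated.
tts.tts_to_file(text="同學", file_path="output.wav")
```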
| {"language": ["hak"], "license": "mit", "pipeline_tag": "text-to-speech"} | formospeech/taiwanese-hakka-tts-sixian-1f-240417 | null | [
"transformers",
"text-to-speech",
"hak",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:38:25+00:00 | [] | [
"hak"
] | TAGS
#transformers #text-to-speech #hak #license-mit #endpoints_compatible #region-us
| # Model Card for taiwanese-hakka-tts-sixian-1f-240417
Experimental modeling to find out if some words are poorly generated.
Example: 同學, 北部, 屋下, 看得到
## Model Details
- Pure Vits :)
- Only use sixian female data.
### Model Sources
- Repository: coqui-ai/TTS
- Demo: Hugging Face Space
| [
"# Model Card for taiwanese-hakka-tts-sixian-1f-240417\n\n\n\nExperimental modeling to find out if some words are poorly generated.\nExample: 同學, 北部, 屋下, 看得到",
"## Model Details\n\n- Pure Vits :)\n- Only use sixian female data.",
"### Model Sources\n\n\n\n- Repository: coqui-ai/TTS\n- Demo: Hugging Face Space"
] | [
"TAGS\n#transformers #text-to-speech #hak #license-mit #endpoints_compatible #region-us \n",
"# Model Card for taiwanese-hakka-tts-sixian-1f-240417\n\n\n\nExperimental modeling to find out if some words are poorly generated.\nExample: 同學, 北部, 屋下, 看得到",
"## Model Details\n\n- Pure Vits :)\n- Only use sixian female data.",
"### Model Sources\n\n\n\n- Repository: coqui-ai/TTS\n- Demo: Hugging Face Space"
] |
null | null |
# DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF
This model was converted to GGUF format from [`Undi95/SolarMaid-v0.1.1`](https://huggingface.co/Undi95/SolarMaid-v0.1.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Undi95/SolarMaid-v0.1.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF --model solarmaid-v0.1.1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF --model solarmaid-v0.1.1.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solarmaid-v0.1.1.Q6_K.gguf -n 128
```
| {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw", "llama-cpp", "gguf-my-repo"]} | DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-17T03:39:07+00:00 | [] | [] | TAGS
#gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
|
# DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF
This model was converted to GGUF format from 'Undi95/SolarMaid-v0.1.1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF\nThis model was converted to GGUF format from 'Undi95/SolarMaid-v0.1.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #not-for-all-audiences #nsfw #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/SolarMaid-v0.1.1-Q6_K-GGUF\nThis model was converted to GGUF format from 'Undi95/SolarMaid-v0.1.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# Spaetzle-v69-7b
This is a progressive merge (mostly dare-ties, but also slerp) intended as a suitable compromise for English and German local tasks.
There is also a Q4_K_M-quantized [GGUF](https://huggingface.co/cstr/Spaetzle-v69-7b-GGUF).
It should work sufficiently well with the ChatML prompt template (as all merged models should have seen ChatML prompts at least in the DPO stage).
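For reference, a ChatML-formatted prompt looks roughly as follows (a sketch of the expected layout; the exact template is defined by the bundled tokenizer and is applied automatically via `apply_chat_template`, as in the usage example below):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Wie heißt die Hauptstadt von Deutschland?<|im_end|>
<|im_start|>assistant
```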
## Evaluation
Benchmark scores are not the best achievable, as the model aims for a compromise across a number of criteria, such as German language performance, instruction following, reasoning capabilities, robustness (so far, I did not encounter inserted tokens, for example), model licensing, and others.
Nevertheless, they are not too bad:
Running quantized, it achieves:
- German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0).
- English EQ Bench: Score (v2): 76.43 (Parseable: 171.0).
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cstr__Spaetzle-v69-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.87|
|AI2 Reasoning Challenge (25-Shot)|69.54|
|HellaSwag (10-Shot) |86.77|
|MMLU (5-Shot) |64.63|
|TruthfulQA (0-shot) |65.61|
|Winogrande (5-shot) |81.93|
|GSM8k (5-shot) |68.76|
Nous benchmark results:
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Spaetzle-v69-7b](https://huggingface.co/cstr/Spaetzle-v69-7b)| 44.48| 75.84| 66.15| 46.59| 58.27|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |25.98|± | 2.76|
| | |acc_norm|23.62|± | 2.67|
|agieval_logiqa_en | 0|acc |39.78|± | 1.92|
| | |acc_norm|39.48|± | 1.92|
|agieval_lsat_ar | 0|acc |23.48|± | 2.80|
| | |acc_norm|23.91|± | 2.82|
|agieval_lsat_lr | 0|acc |50.00|± | 2.22|
| | |acc_norm|51.76|± | 2.21|
|agieval_lsat_rc | 0|acc |63.94|± | 2.93|
| | |acc_norm|64.31|± | 2.93|
|agieval_sat_en | 0|acc |76.70|± | 2.95|
| | |acc_norm|77.67|± | 2.91|
|agieval_sat_en_without_passage| 0|acc |46.12|± | 3.48|
| | |acc_norm|44.17|± | 3.47|
|agieval_sat_math | 0|acc |34.09|± | 3.20|
| | |acc_norm|30.91|± | 3.12|
Average: 44.48%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |63.23|± | 1.41|
| | |acc_norm|64.16|± | 1.40|
|arc_easy | 0|acc |85.90|± | 0.71|
| | |acc_norm|82.49|± | 0.78|
|boolq | 1|acc |87.80|± | 0.57|
|hellaswag | 0|acc |67.05|± | 0.47|
| | |acc_norm|85.19|± | 0.35|
|openbookqa | 0|acc |38.40|± | 2.18|
| | |acc_norm|48.40|± | 2.24|
|piqa | 0|acc |82.75|± | 0.88|
| | |acc_norm|84.28|± | 0.85|
|winogrande | 0|acc |78.53|± | 1.15|
Average: 75.84%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |50.67|± | 1.75|
| | |mc2 |66.15|± | 1.48|
Average: 66.15%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|56.84|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|66.67|± | 2.46|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|40.70|± | 3.06|
|bigbench_geometric_shapes | 0|multiple_choice_grade|24.79|± | 2.28|
| | |exact_str_match |10.58|± | 1.63|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|31.00|± | 2.07|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.00|± | 1.59|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|58.00|± | 2.85|
|bigbench_movie_recommendation | 0|multiple_choice_grade|45.80|± | 2.23|
|bigbench_navigate | 0|multiple_choice_grade|52.10|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|69.55|± | 1.03|
|bigbench_ruin_names | 0|multiple_choice_grade|48.88|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|30.96|± | 1.46|
|bigbench_snarks | 0|multiple_choice_grade|73.48|± | 3.29|
|bigbench_sports_understanding | 0|multiple_choice_grade|74.14|± | 1.40|
|bigbench_temporal_sequences | 0|multiple_choice_grade|42.70|± | 1.56|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|23.60|± | 1.20|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.40|± | 0.93|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|58.00|± | 2.85|
Average: 46.59%
Average score: 58.27%
## 🧩 Merge Configuration
Spaetzle-v69-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
* [cstr/Spaetzle-v68-7b](https://huggingface.co/cstr/Spaetzle-v68-7b)
The merge tree in total involves the following original models:
- [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
- [mayflowergmbh/Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo)
- [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
- [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
- [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B)
- [occiglot/occiglot-7b-de-en-instruct](https://huggingface.co/occiglot/occiglot-7b-de-en-instruct)
- [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
- [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
- [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)
- [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
- [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
- [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)
- [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
- [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)
For this last merge:
```yaml
models:
- model: cstr/Spaetzle-v68-7b
# no parameters necessary for base model
- model: abideen/AlphaMonarch-dora
parameters:
density: 0.60
weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v68-7b
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "cstr/Spaetzle-v69-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"language": ["de", "en"], "license": "cc-by-nc-4.0", "tags": ["merge", "mergekit", "lazymergekit"], "base_model": ["abideen/AlphaMonarch-dora", "mayflowergmbh/Wiedervereinigung-7b-dpo", "flemmingmiguel/NeuDist-Ro-7B", "ResplendentAI/Flora_DPO_7B", "yleo/EmertonMonarch-7B", "occiglot/occiglot-7b-de-en-instruct", "OpenPipe/mistral-ft-optimized-1227", "DiscoResearch/DiscoLM_German_7b_v1", "LeoLM/leo-mistral-hessianai-7b", "DRXD1000/Phoenix", "VAGOsolutions/SauerkrautLM-7b-v1-mistral", "malteos/hermeo-7b", "FelixChao/WestSeverus-7B-DPO-v2", "cognitivecomputations/openchat-3.5-0106-laser"]} | cstr/Spaetzle-v69-7b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"de",
"en",
"base_model:abideen/AlphaMonarch-dora",
"base_model:mayflowergmbh/Wiedervereinigung-7b-dpo",
"base_model:flemmingmiguel/NeuDist-Ro-7B",
"base_model:ResplendentAI/Flora_DPO_7B",
"base_model:yleo/EmertonMonarch-7B",
"base_model:occiglot/occiglot-7b-de-en-instruct",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:LeoLM/leo-mistral-hessianai-7b",
"base_model:DRXD1000/Phoenix",
"base_model:VAGOsolutions/SauerkrautLM-7b-v1-mistral",
"base_model:malteos/hermeo-7b",
"base_model:FelixChao/WestSeverus-7B-DPO-v2",
"base_model:cognitivecomputations/openchat-3.5-0106-laser",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:40:34+00:00 | [] | [
"de",
"en"
] | TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #conversational #de #en #base_model-abideen/AlphaMonarch-dora #base_model-mayflowergmbh/Wiedervereinigung-7b-dpo #base_model-flemmingmiguel/NeuDist-Ro-7B #base_model-ResplendentAI/Flora_DPO_7B #base_model-yleo/EmertonMonarch-7B #base_model-occiglot/occiglot-7b-de-en-instruct #base_model-OpenPipe/mistral-ft-optimized-1227 #base_model-DiscoResearch/DiscoLM_German_7b_v1 #base_model-LeoLM/leo-mistral-hessianai-7b #base_model-DRXD1000/Phoenix #base_model-VAGOsolutions/SauerkrautLM-7b-v1-mistral #base_model-malteos/hermeo-7b #base_model-FelixChao/WestSeverus-7B-DPO-v2 #base_model-cognitivecomputations/openchat-3.5-0106-laser #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Spaetzle-v69-7b
===============
This is a progressive merge (mostly dare-ties, but also slerp) intended as a suitable compromise for English and German local tasks.
There is also a Q4\_K\_M quantized GGUF.
It should work sufficiently well with the ChatML prompt template (as all merged models should have seen ChatML prompts at least in the DPO stage).
Evaluation
----------
Benchmark scores are not the best achievable, as the model aims for a compromise across a number of criteria, such as German language performance, instruction following, reasoning capabilities, robustness (so far, I did not encounter inserted tokens, for example), model licensing, and others.
Nevertheless, they are not too bad:
Running quantized, it achieves:
* German EQ Bench: Score (v2\_de): 62.59 (Parseable: 171.0).
* English EQ Bench: Score (v2): 76.43 (Parseable: 171.0).
Open LLM Leaderboard Evaluation Results:
Detailed results can be found here
Nous benchmark results:
### AGIEval
Average: 44.48%
### GPT4All
Average: 75.84%
### TruthfulQA
Average: 66.15%
### Bigbench
Average: 46.59%
Average score: 58.27%
Merge Configuration
-------------------
Spaetzle-v69-7b is a merge of the following models using LazyMergekit:
* abideen/AlphaMonarch-dora
* cstr/Spaetzle-v68-7b
The merge tree in total involves the following original models:
* abideen/AlphaMonarch-dora
* mayflowergmbh/Wiedervereinigung-7b-dpo
* flemmingmiguel/NeuDist-Ro-7B
* ResplendentAI/Flora\_DPO\_7B
* yleo/EmertonMonarch-7B
* occiglot/occiglot-7b-de-en-instruct
* OpenPipe/mistral-ft-optimized-1227
* DiscoResearch/DiscoLM\_German\_7b\_v1
* LeoLM/leo-mistral-hessianai-7b
* DRXD1000/Phoenix
* VAGOsolutions/SauerkrautLM-7b-v1-mistral
* malteos/hermeo-7b
* FelixChao/WestSeverus-7B-DPO-v2
* cognitivecomputations/openchat-3.5-0106-laser
For this last merge:
Usage
-----
| [
"### AGIEval\n\n\n\nAverage: 44.48%",
"### GPT4All\n\n\n\nAverage: 75.84%",
"### TruthfulQA\n\n\n\nAverage: 66.15%",
"### Bigbench\n\n\n\nAverage: 46.59%\n\n\nAverage score: 58.27%\n\n\nMerge Configuration\n-------------------\n\n\nSpaetzle-v69-7b is a merge of the following models using LazyMergekit:\n\n\n* abideen/AlphaMonarch-dora\n* cstr/Spaetzle-v68-7b\n\n\nThe merge tree in total involves the following original models:\n\n\n* abideen/AlphaMonarch-dora\n* mayflowergmbh/Wiedervereinigung-7b-dpo\n* flemmingmiguel/NeuDist-Ro-7B\n* ResplendentAI/Flora\\_DPO\\_7B\n* yleo/EmertonMonarch-7B\n* occiglot/occiglot-7b-de-en-instruct\n* OpenPipe/mistral-ft-optimized-1227\n* DiscoResearch/DiscoLM\\_German\\_7b\\_v1\n* LeoLM/leo-mistral-hessianai-7b\n* DRXD1000/Phoenix\n* VAGOsolutions/SauerkrautLM-7b-v1-mistral\n* malteos/hermeo-7b\n* FelixChao/WestSeverus-7B-DPO-v2\n* cognitivecomputations/openchat-3.5-0106-laser\n\n\nFor this last merge:\n\n\nUsage\n-----"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #conversational #de #en #base_model-abideen/AlphaMonarch-dora #base_model-mayflowergmbh/Wiedervereinigung-7b-dpo #base_model-flemmingmiguel/NeuDist-Ro-7B #base_model-ResplendentAI/Flora_DPO_7B #base_model-yleo/EmertonMonarch-7B #base_model-occiglot/occiglot-7b-de-en-instruct #base_model-OpenPipe/mistral-ft-optimized-1227 #base_model-DiscoResearch/DiscoLM_German_7b_v1 #base_model-LeoLM/leo-mistral-hessianai-7b #base_model-DRXD1000/Phoenix #base_model-VAGOsolutions/SauerkrautLM-7b-v1-mistral #base_model-malteos/hermeo-7b #base_model-FelixChao/WestSeverus-7B-DPO-v2 #base_model-cognitivecomputations/openchat-3.5-0106-laser #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### AGIEval\n\n\n\nAverage: 44.48%",
"### GPT4All\n\n\n\nAverage: 75.84%",
"### TruthfulQA\n\n\n\nAverage: 66.15%",
"### Bigbench\n\n\n\nAverage: 46.59%\n\n\nAverage score: 58.27%\n\n\nMerge Configuration\n-------------------\n\n\nSpaetzle-v69-7b is a merge of the following models using LazyMergekit:\n\n\n* abideen/AlphaMonarch-dora\n* cstr/Spaetzle-v68-7b\n\n\nThe merge tree in total involves the following original models:\n\n\n* abideen/AlphaMonarch-dora\n* mayflowergmbh/Wiedervereinigung-7b-dpo\n* flemmingmiguel/NeuDist-Ro-7B\n* ResplendentAI/Flora\\_DPO\\_7B\n* yleo/EmertonMonarch-7B\n* occiglot/occiglot-7b-de-en-instruct\n* OpenPipe/mistral-ft-optimized-1227\n* DiscoResearch/DiscoLM\\_German\\_7b\\_v1\n* LeoLM/leo-mistral-hessianai-7b\n* DRXD1000/Phoenix\n* VAGOsolutions/SauerkrautLM-7b-v1-mistral\n* malteos/hermeo-7b\n* FelixChao/WestSeverus-7B-DPO-v2\n* cognitivecomputations/openchat-3.5-0106-laser\n\n\nFor this last merge:\n\n\nUsage\n-----"
] |
null | null |
# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF
This model was converted to GGUF format from [`bhavinjawade/SOLAR-10B-OrcaDPO-Jawade`](https://huggingface.co/bhavinjawade/SOLAR-10B-OrcaDPO-Jawade) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bhavinjawade/SOLAR-10B-OrcaDPO-Jawade) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF --model solar-10b-orcadpo-jawade.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF --model solar-10b-orcadpo-jawade.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-10b-orcadpo-jawade.Q6_K.gguf -n 128
```
| {"license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"]} | DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Intel/orca_dpo_pairs",
"license:mit",
"region:us"
] | null | 2024-04-17T03:40:37+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us
|
# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF
This model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-OrcaDPO-Jawade' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-OrcaDPO-Jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us \n",
"# DavidAU/SOLAR-10B-OrcaDPO-Jawade-Q6_K-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/SOLAR-10B-OrcaDPO-Jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6720
- F1 Score: 0.6799
- Accuracy: 0.6803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
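A sketch of how these settings might be expressed as `transformers.TrainingArguments` (the actual training script, data pipeline and PEFT setup are not shown on this card):
```python
# Hedged sketch: the card's hyperparameters mapped onto TrainingArguments.
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,  # card reports train_batch_size: 2048
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```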
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.664 | 16.67 | 200 | 0.6401 | 0.6299 | 0.6304 |
| 0.5946 | 33.33 | 400 | 0.6389 | 0.6380 | 0.6488 |
| 0.5639 | 50.0 | 600 | 0.6426 | 0.6450 | 0.6446 |
| 0.5371 | 66.67 | 800 | 0.6412 | 0.6514 | 0.6560 |
| 0.5172 | 83.33 | 1000 | 0.6499 | 0.6546 | 0.6581 |
| 0.5038 | 100.0 | 1200 | 0.6540 | 0.6534 | 0.6585 |
| 0.4921 | 116.67 | 1400 | 0.6549 | 0.6640 | 0.6650 |
| 0.4839 | 133.33 | 1600 | 0.6570 | 0.6659 | 0.6682 |
| 0.4754 | 150.0 | 1800 | 0.6598 | 0.6644 | 0.6654 |
| 0.4686 | 166.67 | 2000 | 0.6678 | 0.6695 | 0.6709 |
| 0.4616 | 183.33 | 2200 | 0.6607 | 0.6705 | 0.6709 |
| 0.4551 | 200.0 | 2400 | 0.6711 | 0.6593 | 0.6637 |
| 0.4511 | 216.67 | 2600 | 0.6789 | 0.6687 | 0.6685 |
| 0.4417 | 233.33 | 2800 | 0.6767 | 0.6714 | 0.6716 |
| 0.4368 | 250.0 | 3000 | 0.6887 | 0.6732 | 0.6737 |
| 0.4316 | 266.67 | 3200 | 0.6859 | 0.6682 | 0.6709 |
| 0.4266 | 283.33 | 3400 | 0.7035 | 0.6705 | 0.6706 |
| 0.4209 | 300.0 | 3600 | 0.7060 | 0.6617 | 0.6647 |
| 0.415 | 316.67 | 3800 | 0.7069 | 0.6694 | 0.6692 |
| 0.4083 | 333.33 | 4000 | 0.7094 | 0.6644 | 0.6644 |
| 0.4022 | 350.0 | 4200 | 0.7398 | 0.6621 | 0.6640 |
| 0.3967 | 366.67 | 4400 | 0.7386 | 0.6601 | 0.6623 |
| 0.3896 | 383.33 | 4600 | 0.7477 | 0.6668 | 0.6668 |
| 0.3849 | 400.0 | 4800 | 0.7197 | 0.6528 | 0.6543 |
| 0.3791 | 416.67 | 5000 | 0.7397 | 0.6602 | 0.6619 |
| 0.3744 | 433.33 | 5200 | 0.7433 | 0.6605 | 0.6616 |
| 0.3684 | 450.0 | 5400 | 0.7545 | 0.6619 | 0.6637 |
| 0.3626 | 466.67 | 5600 | 0.7832 | 0.6650 | 0.6678 |
| 0.3596 | 483.33 | 5800 | 0.7617 | 0.6638 | 0.6664 |
| 0.3536 | 500.0 | 6000 | 0.7507 | 0.6609 | 0.6619 |
| 0.3519 | 516.67 | 6200 | 0.7676 | 0.6641 | 0.6650 |
| 0.3473 | 533.33 | 6400 | 0.7612 | 0.6642 | 0.6657 |
| 0.3437 | 550.0 | 6600 | 0.7850 | 0.6601 | 0.6616 |
| 0.3402 | 566.67 | 6800 | 0.7865 | 0.6602 | 0.6612 |
| 0.3379 | 583.33 | 7000 | 0.8045 | 0.6598 | 0.6609 |
| 0.3344 | 600.0 | 7200 | 0.7939 | 0.6596 | 0.6612 |
| 0.3309 | 616.67 | 7400 | 0.7899 | 0.6598 | 0.6616 |
| 0.3293 | 633.33 | 7600 | 0.7791 | 0.6599 | 0.6602 |
| 0.3248 | 650.0 | 7800 | 0.7812 | 0.6588 | 0.6598 |
| 0.3227 | 666.67 | 8000 | 0.8036 | 0.6586 | 0.6605 |
| 0.3219 | 683.33 | 8200 | 0.8220 | 0.6582 | 0.6598 |
| 0.3208 | 700.0 | 8400 | 0.8077 | 0.6596 | 0.6605 |
| 0.3183 | 716.67 | 8600 | 0.8185 | 0.6566 | 0.6585 |
| 0.3172 | 733.33 | 8800 | 0.8053 | 0.6577 | 0.6595 |
| 0.3165 | 750.0 | 9000 | 0.8075 | 0.6628 | 0.6633 |
| 0.3145 | 766.67 | 9200 | 0.8159 | 0.6595 | 0.6612 |
| 0.3133 | 783.33 | 9400 | 0.8092 | 0.6621 | 0.6633 |
| 0.3126 | 800.0 | 9600 | 0.8099 | 0.6601 | 0.6616 |
| 0.3124 | 816.67 | 9800 | 0.8129 | 0.6610 | 0.6626 |
| 0.3128 | 833.33 | 10000 | 0.8149 | 0.6616 | 0.6633 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:41:03+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K79me3-seqsight\_65536\_512\_47M-L32\_all
=====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K79me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6720
* F1 Score: 0.6799
* Accuracy: 0.6803
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8106
- F1 Score: 0.5857
- Accuracy: 0.5900
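A minimal loading sketch, assuming the adapter attaches to the base seqsight checkpoint through peft and that the task is binary sequence classification (both assumptions to verify):
```python
# Minimal sketch; num_labels and trust_remote_code are assumptions to verify.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer
base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_all"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```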
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6793 | 11.76 | 200 | 0.6840 | 0.5431 | 0.5745 |
| 0.6386 | 23.53 | 400 | 0.6913 | 0.5605 | 0.5773 |
| 0.6181 | 35.29 | 600 | 0.6927 | 0.5853 | 0.5862 |
| 0.5987 | 47.06 | 800 | 0.7023 | 0.5697 | 0.5836 |
| 0.581 | 58.82 | 1000 | 0.7206 | 0.5861 | 0.5871 |
| 0.5677 | 70.59 | 1200 | 0.7308 | 0.5693 | 0.5777 |
| 0.5602 | 82.35 | 1400 | 0.7299 | 0.5859 | 0.5865 |
| 0.5539 | 94.12 | 1600 | 0.7222 | 0.5793 | 0.5821 |
| 0.549 | 105.88 | 1800 | 0.7206 | 0.5816 | 0.5821 |
| 0.5446 | 117.65 | 2000 | 0.7444 | 0.5780 | 0.5792 |
| 0.5401 | 129.41 | 2200 | 0.7482 | 0.5816 | 0.5840 |
| 0.5365 | 141.18 | 2400 | 0.7514 | 0.5778 | 0.5786 |
| 0.5328 | 152.94 | 2600 | 0.7572 | 0.5820 | 0.5818 |
| 0.5293 | 164.71 | 2800 | 0.7783 | 0.5797 | 0.5840 |
| 0.5264 | 176.47 | 3000 | 0.7827 | 0.5748 | 0.5824 |
| 0.524 | 188.24 | 3200 | 0.7527 | 0.5837 | 0.5836 |
| 0.52 | 200.0 | 3400 | 0.7728 | 0.5769 | 0.5824 |
| 0.5155 | 211.76 | 3600 | 0.7585 | 0.5824 | 0.5821 |
| 0.5121 | 223.53 | 3800 | 0.7604 | 0.5833 | 0.5862 |
| 0.5072 | 235.29 | 4000 | 0.7908 | 0.5737 | 0.5846 |
| 0.5029 | 247.06 | 4200 | 0.7811 | 0.5829 | 0.5865 |
| 0.4997 | 258.82 | 4400 | 0.7751 | 0.5847 | 0.5878 |
| 0.495 | 270.59 | 4600 | 0.7709 | 0.5844 | 0.5871 |
| 0.4896 | 282.35 | 4800 | 0.7867 | 0.5791 | 0.5789 |
| 0.4853 | 294.12 | 5000 | 0.8053 | 0.5795 | 0.5827 |
| 0.4806 | 305.88 | 5200 | 0.8140 | 0.5838 | 0.5855 |
| 0.475 | 317.65 | 5400 | 0.7949 | 0.5853 | 0.5855 |
| 0.4725 | 329.41 | 5600 | 0.8253 | 0.5798 | 0.5836 |
| 0.4675 | 341.18 | 5800 | 0.8024 | 0.5881 | 0.5890 |
| 0.4623 | 352.94 | 6000 | 0.8352 | 0.5908 | 0.5947 |
| 0.4576 | 364.71 | 6200 | 0.8424 | 0.5804 | 0.5836 |
| 0.4553 | 376.47 | 6400 | 0.8405 | 0.5854 | 0.5865 |
| 0.4504 | 388.24 | 6600 | 0.8300 | 0.5829 | 0.5840 |
| 0.4467 | 400.0 | 6800 | 0.8658 | 0.5840 | 0.5836 |
| 0.4454 | 411.76 | 7000 | 0.8697 | 0.5800 | 0.5811 |
| 0.4415 | 423.53 | 7200 | 0.8729 | 0.5840 | 0.5859 |
| 0.4371 | 435.29 | 7400 | 0.8727 | 0.5820 | 0.5843 |
| 0.4363 | 447.06 | 7600 | 0.8877 | 0.5835 | 0.5887 |
| 0.4331 | 458.82 | 7800 | 0.8626 | 0.5825 | 0.5855 |
| 0.4297 | 470.59 | 8000 | 0.8745 | 0.5878 | 0.5896 |
| 0.4298 | 482.35 | 8200 | 0.8671 | 0.5861 | 0.5871 |
| 0.4254 | 494.12 | 8400 | 0.8759 | 0.5845 | 0.5874 |
| 0.4258 | 505.88 | 8600 | 0.8767 | 0.5823 | 0.5852 |
| 0.4241 | 517.65 | 8800 | 0.8787 | 0.5822 | 0.5836 |
| 0.4211 | 529.41 | 9000 | 0.8842 | 0.5850 | 0.5871 |
| 0.4212 | 541.18 | 9200 | 0.8862 | 0.5850 | 0.5871 |
| 0.4192 | 552.94 | 9400 | 0.8811 | 0.5834 | 0.5852 |
| 0.4189 | 564.71 | 9600 | 0.8915 | 0.5810 | 0.5830 |
| 0.4186 | 576.47 | 9800 | 0.8862 | 0.5814 | 0.5843 |
| 0.417 | 588.24 | 10000 | 0.8846 | 0.5828 | 0.5852 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T03:41:52+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K4me1-seqsight\_65536\_512\_47M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8106
* F1 Score: 0.5857
* Accuracy: 0.5900
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
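A generic sketch, assuming the checkpoint behaves like a standard GPT-2 causal LM (the repo id below is taken from this card's metadata):
```python
# Minimal sketch; assumes a standard GPT-2-style causal LM checkpoint.
from transformers import pipeline
generator = pipeline("text-generation", model="mohdumar/gpt2-untied")
print(generator("The meaning of life is", max_new_tokens=32)[0]["generated_text"])
```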
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | mohdumar/gpt2-untied | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:41:57+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF
This model was converted to GGUF format from [`bhavinjawade/nectororca-solar10b-jawade`](https://huggingface.co/bhavinjawade/nectororca-solar10b-jawade) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bhavinjawade/nectororca-solar10b-jawade) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF --model nectororca-solar10b-jawade.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF --model nectororca-solar10b-jawade.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nectororca-solar10b-jawade.Q6_K.gguf -n 128
```
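If you prefer Python over the CLI, the same GGUF file can also be used through the llama-cpp-python bindings. This is a minimal sketch only; the prompt, context size, and token budget are illustrative, and `Llama.from_pretrained` needs `huggingface_hub` installed to fetch the file.
```python
# Minimal sketch using the llama-cpp-python bindings (illustrative settings).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF",
    filename="nectororca-solar10b-jawade.Q6_K.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```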
| {"license": "mit", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"]} | DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:Intel/orca_dpo_pairs",
"license:mit",
"region:us"
] | null | 2024-04-17T03:42:18+00:00 | [] | [] | TAGS
#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us
|
# DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF
This model was converted to GGUF format from 'bhavinjawade/nectororca-solar10b-jawade' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/nectororca-solar10b-jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-Intel/orca_dpo_pairs #license-mit #region-us \n",
"# DavidAU/nectororca-solar10b-jawade-Q6_K-GGUF\nThis model was converted to GGUF format from 'bhavinjawade/nectororca-solar10b-jawade' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as a base.
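As a rough reproduction sketch, a config like the YAML shown under "Configuration" below can be applied with the mergekit CLI; the config filename and output directory here are illustrative assumptions, not part of this repo.

```bash
# Illustrative: apply the YAML from the Configuration section with mergekit.
pip install mergekit
mergekit-yaml config.yaml ./Open-StaMis-stock --cuda
```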
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Nexusflow/Starling-LM-7B-beta
- model: openchat/openchat-3.5-0106
- model: openchat/openchat-3.5-0106
merge_method: model_stock
base_model: mistral-community/Mistral-7B-v0.2
dtype: bfloat16
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["openchat/openchat-3.5-0106", "mistral-community/Mistral-7B-v0.2", "Nexusflow/Starling-LM-7B-beta"]} | mayacinka/Open-StaMis-stock | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:openchat/openchat-3.5-0106",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:Nexusflow/Starling-LM-7B-beta",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:45:08+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-openchat/openchat-3.5-0106 #base_model-mistral-community/Mistral-7B-v0.2 #base_model-Nexusflow/Starling-LM-7B-beta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using mistral-community/Mistral-7B-v0.2 as a base.
### Models Merged
The following models were included in the merge:
* openchat/openchat-3.5-0106
* Nexusflow/Starling-LM-7B-beta
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using mistral-community/Mistral-7B-v0.2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* openchat/openchat-3.5-0106\n* Nexusflow/Starling-LM-7B-beta",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-openchat/openchat-3.5-0106 #base_model-mistral-community/Mistral-7B-v0.2 #base_model-Nexusflow/Starling-LM-7B-beta #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using mistral-community/Mistral-7B-v0.2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* openchat/openchat-3.5-0106\n* Nexusflow/Starling-LM-7B-beta",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | null |
# DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF
This model was converted to GGUF format from [`kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2`](https://huggingface.co/kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF --model sakura-solrca-math-instruct-dpo-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF --model sakura-solrca-math-instruct-dpo-v2.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sakura-solrca-math-instruct-dpo-v2.Q6_K.gguf -n 128
```
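Alternatively, the quantized file can be fetched on its own with the huggingface_hub CLI and pointed at any llama.cpp-compatible runtime; the local directory below is illustrative.
```bash
# Download just the GGUF file with the huggingface_hub CLI (illustrative local path).
pip install -U "huggingface_hub[cli]"
huggingface-cli download DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF \
  sakura-solrca-math-instruct-dpo-v2.Q6_K.gguf --local-dir .
```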
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["kyujinpy/orca_math_dpo"], "pipeline_tag": "text-generation", "model-index": [{"name": "Sakura-SOLRCA-Math-Instruct-DPO-v2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 71.25, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.52, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.13, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 72.16}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.03, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.91, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}]}]} | DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:kyujinpy/orca_math_dpo",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] | null | 2024-04-17T03:46:55+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-kyujinpy/orca_math_dpo #license-cc-by-nc-sa-4.0 #model-index #region-us
|
# DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF
This model was converted to GGUF format from 'kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-kyujinpy/orca_math_dpo #license-cc-by-nc-sa-4.0 #model-index #region-us \n",
"# DavidAU/Sakura-SOLRCA-Math-Instruct-DPO-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | null |
# DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`kyujinpy/Sakura-SOLAR-Instruct`](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF --model sakura-solar-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF --model sakura-solar-instruct.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sakura-solar-instruct.Q6_K.gguf -n 128
```
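Once `llama-server` is running as above, completions can be requested over HTTP; this sketch assumes the default port 8080 and uses an illustrative prompt.
```bash
# Query the running llama.cpp server (default port assumed to be 8080).
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```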
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["merge", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "model-index": [{"name": "Sakura-SOLAR-Instruct", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 70.99, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.42, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.33, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 71.79}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.66, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 65.2, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct", "name": "Open LLM Leaderboard"}}]}]} | DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF | null | [
"gguf",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] | null | 2024-04-17T03:48:12+00:00 | [] | [
"en"
] | TAGS
#gguf #merge #llama-cpp #gguf-my-repo #text-generation #en #license-cc-by-nc-sa-4.0 #model-index #region-us
|
# DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF
This model was converted to GGUF format from 'kyujinpy/Sakura-SOLAR-Instruct' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/Sakura-SOLAR-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #llama-cpp #gguf-my-repo #text-generation #en #license-cc-by-nc-sa-4.0 #model-index #region-us \n",
"# DavidAU/Sakura-SOLAR-Instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/Sakura-SOLAR-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize-v2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6185
## Model description
More information needed
## Intended uses & limitations
More information needed
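A minimal loading sketch is shown below, assuming the LoRA adapter in this repo is applied on top of the base model with PEFT; the summarization prompt format used during training is not documented here, so treat that as an assumption.

```python
# Sketch: load the LoRA adapter on top of the base instruct model.
# device_map="auto" needs the accelerate package; dtype/device settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "asahikuroki222/mistral7binstruct_summarize-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```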
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2043 | 0.1 | 10 | 1.8384 |
| 1.8896 | 0.19 | 20 | 1.7382 |
| 1.8288 | 0.29 | 30 | 1.6616 |
| 1.6991 | 0.38 | 40 | 1.6320 |
| 1.7721 | 0.48 | 50 | 1.6185 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral7binstruct_summarize-v2", "results": []}]} | asahikuroki222/mistral7binstruct_summarize-v2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T03:49:08+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
| mistral7binstruct\_summarize-v2
===============================
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6185
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 1
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: constant
* lr\_scheduler\_warmup\_steps: 0.03
* training\_steps: 50
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 50",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | null |
# DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF
This model was converted to GGUF format from [`kyujinpy/Sakura-SOLAR-Instruct-DPO-v2`](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct-DPO-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct-DPO-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF --model sakura-solar-instruct-dpo-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF --model sakura-solar-instruct-dpo-v2.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m sakura-solar-instruct-dpo-v2.Q6_K.gguf -n 128
```
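The llama.cpp server also exposes an OpenAI-compatible endpoint, so a running `llama-server` can be queried with the `openai` Python client; the base URL, port, model name, and message below are illustrative assumptions.
```python
# Talk to a running llama-server via its OpenAI-compatible API (illustrative settings).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")
resp = client.chat.completions.create(
    model="sakura-solar-instruct-dpo-v2",
    messages=[{"role": "user", "content": "Solve: 12 * 7 + 5"}],
)
print(resp.choices[0].message.content)
```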
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["argilla/distilabel-math-preference-dpo"], "pipeline_tag": "text-generation", "model-index": [{"name": "Sakura-SOLAR-Instruct-DPO-v2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 70.9, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.41, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.48, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 71.86}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.43, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.76, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/Sakura-SOLAR-Instruct-DPO-v2", "name": "Open LLM Leaderboard"}}]}]} | DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:argilla/distilabel-math-preference-dpo",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] | null | 2024-04-17T03:49:28+00:00 | [] | [
"en"
] | TAGS
#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-argilla/distilabel-math-preference-dpo #license-cc-by-nc-sa-4.0 #model-index #region-us
|
# DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF
This model was converted to GGUF format from 'kyujinpy/Sakura-SOLAR-Instruct-DPO-v2' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/Sakura-SOLAR-Instruct-DPO-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-argilla/distilabel-math-preference-dpo #license-cc-by-nc-sa-4.0 #model-index #region-us \n",
"# DavidAU/Sakura-SOLAR-Instruct-DPO-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/Sakura-SOLAR-Instruct-DPO-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
We built this model on top of princeton-nlp/Sheared-LLaMA-1.3B.
We fine-tuned the model on Korean Wikipedia and KoAlpaca data using LoRA.
Please see the following information about princeton-nlp/Sheared-LLaMA-1.3B.
**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)
**License**: Must comply with license of Llama2 since it's a model derived from Llama2.
---
Sheared-LLaMA-1.3B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from different domains in the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data) to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with HuggingFace via
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
```
- Smaller-scale
- Same vocabulary as LLaMA1 and LLaMA2
- Derived with a budget of 50B tokens by utilizing existing strong LLMs
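Building on the loading snippet above, a short generation sketch (the prompt and decoding settings are illustrative):
```python
# Illustrative generation with the loaded model (greedy decoding, kept short for brevity).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Sheared-LLaMA-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Structured pruning of language models", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```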
## Downstream Tasks
We evaluate on an extensive set of downstream tasks including reasoning, reading comprehension, language modeling and knowledge-intensive tasks. Our Sheared-LLaMA models outperform existing large language models of comparable size.
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| LLaMA2-7B | 2T | 64.6 |
**1.3B**
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-1.3B | 300B | 48.2 |
| Pythia-1.4B | 300B | 48.9 |
| **Sheared-LLaMA-1.3B** | **50B** | **51.0** |
**3B**
| Model | # Pre-training Tokens | Average Performance |
| ------------------- | --------------------- | ------------------- |
| OPT-2.7B | 300B | 51.4 |
| Pythia-2.8B | 300B | 52.5 |
| INCITE-Base-3B | 800B | 54.7 |
| Open-LLaMA-3B-v1 | 1T | 55.1 |
| Open-LLaMA-3B-v2 | 1T | 55.7 |
| Sheared-LLaMA-2.7B | 50B | 56.7 |
## Bibtex
```
@article{xia2023sheared,
title={Sheared llama: Accelerating language model pre-training via structured pruning},
author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
journal={arXiv preprint arXiv:2310.06694},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_princeton-nlp__Sheared-LLaMA-1.3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 31.47 |
| ARC (25-shot) | 32.85 |
| HellaSwag (10-shot) | 60.91 |
| MMLU (5-shot) | 25.71 |
| TruthfulQA (0-shot) | 37.14 |
| Winogrande (5-shot) | 58.64 |
| GSM8K (5-shot) | 0.45 |
| DROP (3-shot) | 4.56 |
| {"license": "apache-2.0"} | ahnyeonchan/legendary-river-koalpaca | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2310.06694",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:50:32+00:00 | [
"2310.06694"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #arxiv-2310.06694 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
We built this model on top of princeton-nlp/Sheared-LLaMA-1.3B.
We fine-tuned the model on Korean Wikipedia and KoAlpaca data using LoRA.
Please see the following information about princeton-nlp/Sheared-LLaMA-1.3B.
Paper: URL
Code: URL
Models: Sheared-LLaMA-1.3B, Sheared-LLaMA-2.7B
Pruned Models without Continued Pre-training: Sheared-LLaMA-1.3B-Pruned, Sheared-LLaMA-2.7B-Pruned
Instruction-tuned Models: Sheared-LLaMA-1.3B-ShareGPT, Sheared-LLaMA-2.7B-ShareGPT
License: Must comply with license of Llama2 since it's a model derived from Llama2.
---
Sheared-LLaMA-1.3B is a model pruned and further pre-trained from meta-llama/Llama-2-7b-hf. We dynamically load data from different domains in the RedPajama dataset to prune and continue pre-training the model. We use 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. This model can be loaded with HuggingFace via
* Smaller-scale
* Same vocabulary as LLaMA1 and LLaMA2
* Derived with a budget of 50B tokens by utilizing existing strong LLMs
Downstream Tasks
----------------
We evaluate on an extensive set of downstream tasks including reasoning, reading comprehension, language modeling and knowledge-intensive tasks. Our Sheared-LLaMA models outperform existing large language models of comparable size.
Model: LLaMA2-7B, # Pre-training Tokens: 2T, Average Performance: 64.6
1.3B
Model: OPT-1.3B, # Pre-training Tokens: 300B, Average Performance: 48.2
Model: Pythia-1.4B, # Pre-training Tokens: 300B, Average Performance: 48.9
Model: Sheared-LLaMA-1.3B, # Pre-training Tokens: 50B, Average Performance: 51.0
3B
Model: OPT-2.7B, # Pre-training Tokens: 300B, Average Performance: 51.4
Model: Pythia-2.8B, # Pre-training Tokens: 300B, Average Performance: 52.5
Model: INCITE-Base-3B, # Pre-training Tokens: 800B, Average Performance: 54.7
Model: Open-LLaMA-3B-v1, # Pre-training Tokens: 1T, Average Performance: 55.1
Model: Open-LLaMA-3B-v2, # Pre-training Tokens: 1T, Average Performance: 55.7
Model: Sheared-LLaMA-2.7B, # Pre-training Tokens: 50B, Average Performance: 56.7
Bibtex
------
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [
"# Pre-training Tokens: 2T, Average Performance: 64.6\n\n\n1.3B\n\n\nModel: OPT-1.3B, # Pre-training Tokens: 300B, Average Performance: 48.2\nModel: Pythia-1.4B, # Pre-training Tokens: 300B, Average Performance: 48.9\nModel: Sheared-LLaMA-1.3B, # Pre-training Tokens: 50B, Average Performance: 51.0\n\n\n3B\n\n\nModel: OPT-2.7B, # Pre-training Tokens: 300B, Average Performance: 51.4\nModel: Pythia-2.8B, # Pre-training Tokens: 300B, Average Performance: 52.5\nModel: INCITE-Base-3B, # Pre-training Tokens: 800B, Average Performance: 54.7\nModel: Open-LLaMA-3B-v1, # Pre-training Tokens: 1T, Average Performance: 55.1\nModel: Open-LLaMA-3B-v2, # Pre-training Tokens: 1T, Average Performance: 55.7\nModel: Sheared-LLaMA-2.7B, # Pre-training Tokens: 50B, Average Performance: 56.7\n\n\nBibtex\n------\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-2310.06694 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Pre-training Tokens: 2T, Average Performance: 64.6\n\n\n1.3B\n\n\nModel: OPT-1.3B, # Pre-training Tokens: 300B, Average Performance: 48.2\nModel: Pythia-1.4B, # Pre-training Tokens: 300B, Average Performance: 48.9\nModel: Sheared-LLaMA-1.3B, # Pre-training Tokens: 50B, Average Performance: 51.0\n\n\n3B\n\n\nModel: OPT-2.7B, # Pre-training Tokens: 300B, Average Performance: 51.4\nModel: Pythia-2.8B, # Pre-training Tokens: 300B, Average Performance: 52.5\nModel: INCITE-Base-3B, # Pre-training Tokens: 800B, Average Performance: 54.7\nModel: Open-LLaMA-3B-v1, # Pre-training Tokens: 1T, Average Performance: 55.1\nModel: Open-LLaMA-3B-v2, # Pre-training Tokens: 1T, Average Performance: 55.7\nModel: Sheared-LLaMA-2.7B, # Pre-training Tokens: 50B, Average Performance: 56.7\n\n\nBibtex\n------\n\n\nOpen LLM Leaderboard Evaluation Results\n=======================================\n\n\nDetailed results can be found here"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_shp2_dpo9
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2776
- Rewards/chosen: -1.1246
- Rewards/rejected: -1.9364
- Rewards/accuracies: 0.4900
- Rewards/margins: 0.8117
- Logps/rejected: -216.7705
- Logps/chosen: -231.7811
- Logits/rejected: -0.9117
- Logits/chosen: -0.9723
## Model description
More information needed
## Intended uses & limitations
More information needed
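A minimal loading sketch, assuming the adapter is resolved with PEFT's Auto class; note that the base model meta-llama/Llama-2-7b-chat-hf is gated and requires access.

```python
# Sketch: AutoPeftModelForCausalLM reads the adapter config and attaches it to the base model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("guoyu-zhang/model_shp2_dpo9", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```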
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0604 | 2.67 | 100 | 1.8181 | 7.7452 | 7.8839 | 0.4900 | -0.1387 | -205.8591 | -221.9257 | -0.8827 | -0.9016 |
| 0.0268 | 5.33 | 200 | 2.9688 | -2.8174 | -2.9847 | 0.4800 | 0.1673 | -217.9353 | -233.6619 | -1.1131 | -1.1645 |
| 0.0069 | 8.0 | 300 | 3.0520 | 6.6739 | 6.0279 | 0.5600 | 0.6459 | -207.9212 | -223.1161 | -0.9294 | -1.0100 |
| 0.0 | 10.67 | 400 | 3.2909 | -1.1251 | -1.8955 | 0.4900 | 0.7704 | -216.7250 | -231.7816 | -0.9109 | -0.9719 |
| 0.0 | 13.33 | 500 | 3.2845 | -1.1008 | -1.9104 | 0.5 | 0.8096 | -216.7416 | -231.7545 | -0.9109 | -0.9718 |
| 0.0 | 16.0 | 600 | 3.3090 | -1.1249 | -1.9231 | 0.4900 | 0.7983 | -216.7558 | -231.7813 | -0.9112 | -0.9722 |
| 0.0 | 18.67 | 700 | 3.2953 | -1.1118 | -1.9182 | 0.4900 | 0.8063 | -216.7503 | -231.7668 | -0.9116 | -0.9723 |
| 0.0 | 21.33 | 800 | 3.2821 | -1.1048 | -1.9227 | 0.4900 | 0.8179 | -216.7553 | -231.7590 | -0.9116 | -0.9726 |
| 0.0 | 24.0 | 900 | 3.2731 | -1.1170 | -1.9616 | 0.4900 | 0.8445 | -216.7985 | -231.7726 | -0.9111 | -0.9723 |
| 0.0 | 26.67 | 1000 | 3.2776 | -1.1246 | -1.9364 | 0.4900 | 0.8117 | -216.7705 | -231.7811 | -0.9117 | -0.9723 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_shp2_dpo9", "results": []}]} | guoyu-zhang/model_shp2_dpo9 | null | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-17T03:50:59+00:00 | [] | [] | TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
| model\_shp2\_dpo9
=================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.2776
* Rewards/chosen: -1.1246
* Rewards/rejected: -1.9364
* Rewards/accuracies: 0.4900
* Rewards/margins: 0.8117
* Logps/rejected: -216.7705
* Logps/chosen: -231.7811
* Logits/rejected: -0.9117
* Logits/chosen: -0.9723
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | adediu25/implicit-bert-all | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:53:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Nexusflow/Starling-LM-7B-beta
- model: openchat/openchat-3.5-0106
- model: openchat/openchat-3.5-1210
- model: berkeley-nest/Starling-LM-7B-alpha
merge_method: model_stock
base_model: mistral-community/Mistral-7B-v0.2
dtype: bfloat16
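# To reproduce the merge locally, one possible invocation (assuming mergekit is
# installed per its README and this file is saved as config.yaml; the output
# path and --cuda flag are illustrative choices, not taken from the original run):
#   mergekit-yaml config.yaml ./merged --cuda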
``` | {"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Nexusflow/Starling-LM-7B-beta", "openchat/openchat-3.5-1210", "openchat/openchat-3.5-0106", "mistral-community/Mistral-7B-v0.2", "berkeley-nest/Starling-LM-7B-alpha"]} | mayacinka/Open-StaMis-v02-stock | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:Nexusflow/Starling-LM-7B-beta",
"base_model:openchat/openchat-3.5-1210",
"base_model:openchat/openchat-3.5-0106",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:54:03+00:00 | [
"2403.19522"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Nexusflow/Starling-LM-7B-beta #base_model-openchat/openchat-3.5-1210 #base_model-openchat/openchat-3.5-0106 #base_model-mistral-community/Mistral-7B-v0.2 #base_model-berkeley-nest/Starling-LM-7B-alpha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the Model Stock merge method using mistral-community/Mistral-7B-v0.2 as a base.
### Models Merged
The following models were included in the merge:
* Nexusflow/Starling-LM-7B-beta
* openchat/openchat-3.5-1210
* openchat/openchat-3.5-0106
* berkeley-nest/Starling-LM-7B-alpha
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using mistral-community/Mistral-7B-v0.2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Nexusflow/Starling-LM-7B-beta\n* openchat/openchat-3.5-1210\n* openchat/openchat-3.5-0106\n* berkeley-nest/Starling-LM-7B-alpha",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2403.19522 #base_model-Nexusflow/Starling-LM-7B-beta #base_model-openchat/openchat-3.5-1210 #base_model-openchat/openchat-3.5-0106 #base_model-mistral-community/Mistral-7B-v0.2 #base_model-berkeley-nest/Starling-LM-7B-alpha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the Model Stock merge method using mistral-community/Mistral-7B-v0.2 as a base.",
"### Models Merged\n\nThe following models were included in the merge:\n* Nexusflow/Starling-LM-7B-beta\n* openchat/openchat-3.5-1210\n* openchat/openchat-3.5-0106\n* berkeley-nest/Starling-LM-7B-alpha",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF
This model was converted to GGUF format from [`kyujinpy/SOLAR-Platypus-10.7B-v2`](https://huggingface.co/kyujinpy/SOLAR-Platypus-10.7B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kyujinpy/SOLAR-Platypus-10.7B-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF --model solar-platypus-10.7b-v2.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF --model solar-platypus-10.7b-v2.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-platypus-10.7b-v2.Q6_K.gguf -n 128
```
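The same quantized file can also be loaded from Python. Below is a minimal sketch using the llama-cpp-python bindings (an assumption — they are a separate install from llama.cpp itself; the filename matches the one used in the commands above):
```python
from llama_cpp import Llama

# Load the locally downloaded Q6_K quant with a 2048-token context window
llm = Llama(model_path="solar-platypus-10.7b-v2.Q6_K.gguf", n_ctx=2048)

# Run a simple completion; the result is a dict with OpenAI-style "choices"
out = llm("The meaning of life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```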
| {"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["garage-bAInd/Open-Platypus"], "pipeline_tag": "text-generation"} | DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:54:58+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-garage-bAInd/Open-Platypus #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us
|
# DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF
This model was converted to GGUF format from 'kyujinpy/SOLAR-Platypus-10.7B-v2' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/SOLAR-Platypus-10.7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-garage-bAInd/Open-Platypus #license-cc-by-nc-sa-4.0 #endpoints_compatible #region-us \n",
"# DavidAU/SOLAR-Platypus-10.7B-v2-Q6_K-GGUF\nThis model was converted to GGUF format from 'kyujinpy/SOLAR-Platypus-10.7B-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/SOLAR-10.7B-Instruct-v1.0-DPO`](https://huggingface.co/Eric111/SOLAR-10.7B-Instruct-v1.0-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/SOLAR-10.7B-Instruct-v1.0-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF --model solar-10.7b-instruct-v1.0-dpo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF --model solar-10.7b-instruct-v1.0-dpo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m solar-10.7b-instruct-v1.0-dpo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:56:15+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/SOLAR-10.7B-Instruct-v1.0-DPO' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/SOLAR-10.7B-Instruct-v1.0-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/SOLAR-10.7B-Instruct-v1.0-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/SOLAR-10.7B-Instruct-v1.0-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF
This model was converted to GGUF format from [`abhishekchohan/Yi-9B-Forest-DPO-v1.0`](https://huggingface.co/abhishekchohan/Yi-9B-Forest-DPO-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/abhishekchohan/Yi-9B-Forest-DPO-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF --model yi-9b-forest-dpo-v1.0.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF --model yi-9b-forest-dpo-v1.0.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-9b-forest-dpo-v1.0.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "nvidia/HelpSteer", "jondurbin/truthy-dpo-v0.1"], "pipeline_tag": "text-generation"} | DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:57:45+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-Intel/orca_dpo_pairs #dataset-nvidia/HelpSteer #dataset-jondurbin/truthy-dpo-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF
This model was converted to GGUF format from 'abhishekchohan/Yi-9B-Forest-DPO-v1.0' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF\nThis model was converted to GGUF format from 'abhishekchohan/Yi-9B-Forest-DPO-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-Intel/orca_dpo_pairs #dataset-nvidia/HelpSteer #dataset-jondurbin/truthy-dpo-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/Yi-9B-Forest-DPO-v1.0-Q6_K-GGUF\nThis model was converted to GGUF format from 'abhishekchohan/Yi-9B-Forest-DPO-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF
This model was converted to GGUF format from [`abhishekchohan/mistral-7B-forest-dpo`](https://huggingface.co/abhishekchohan/mistral-7B-forest-dpo) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/abhishekchohan/mistral-7B-forest-dpo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF --model mistral-7b-forest-dpo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF --model mistral-7b-forest-dpo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-forest-dpo.Q6_K.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs", "nvidia/HelpSteer", "jondurbin/truthy-dpo-v0.1"], "pipeline_tag": "text-generation"} | DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T03:59:16+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-Intel/orca_dpo_pairs #dataset-nvidia/HelpSteer #dataset-jondurbin/truthy-dpo-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF
This model was converted to GGUF format from 'abhishekchohan/mistral-7B-forest-dpo' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF\nThis model was converted to GGUF format from 'abhishekchohan/mistral-7B-forest-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #dataset-Intel/orca_dpo_pairs #dataset-nvidia/HelpSteer #dataset-jondurbin/truthy-dpo-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/mistral-7B-forest-dpo-Q6_K-GGUF\nThis model was converted to GGUF format from 'abhishekchohan/mistral-7B-forest-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation | transformers |
# New: mera-mix-4x7B GGUF
This is a repo for GGUF quants of mera-mix-4x7B. Currently it holds only the FP16 and Q8_0 files.
# Original: Model mera-mix-4x7B
This is a mixture-of-experts (MoE) model that is half as large (4 experts instead of 8) as [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
while remaining comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference.
mera-mix-4x7B achieves 76.37 on the OpenLLM eval vs. 72.7 for Mixtral-8x7B (as shown [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mixtral-8x7B-Instruct-v0.1)).
You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces/meraGPT/mera-mixture-chat).
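For local inference with the original full-precision weights, a minimal transformers sketch is shown below (assumptions: the `meraGPT/mera-mix-4x7B` repo ID is taken from the leaderboard links below, sufficient GPU memory is available for a 4-expert MoE, and a plain-text prompt is used — check the original card for the exact chat template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"  # original (non-GGUF) repo, per the links below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain mixture-of-experts models in one short paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```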
<!--
## OpenLLM Eval
| Model | ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|
|-------------------------------------------------------------|----:|--------:|----:|---------:|---------:|----:|------:|
|[mera-mix-4x7B](https://huggingface.co/meraGPT/mera-mix-4x7B)|72.01| 88.82|63.67| 77.45| 84.61|71.65| 76.37|
Raw eval results are available at this [gist](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820)
-->
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meraGPT__mera-mix-4x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.91|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |89.17|
|MMLU (5-Shot) |64.44|
|TruthfulQA (0-shot) |77.17|
|Winogrande (5-shot) |85.64|
|GSM8k (5-shot) |66.11|
| {"license": "apache-2.0", "model-index": [{"name": "mera-mix-4x7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.95, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 89.17, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 64.44, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 77.17}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 85.64, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.11, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B", "name": "Open LLM Leaderboard"}}]}]} | oceansweep/mera-mix-4x7B-GGUF | null | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T03:59:18+00:00 | [] | [] | TAGS
#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| New: mera-mix-4x7B GGUF
=======================
This is a repo for GGUF quants of mera-mix-4x7B. Currently it holds only the FP16 and Q8\_0 files.
Original: Model mera-mix-4x7B
=============================
This is a mixture-of-experts (MoE) model that is half as large (4 experts instead of 8) as Mixtral-8x7B
while remaining comparable to it across different benchmarks. You can use it as a drop-in replacement for Mixtral-8x7B and get much faster inference.
mera-mix-4x7B achieves 76.37 on the OpenLLM eval vs. 72.7 for Mixtral-8x7B (as shown here).
You can try the model with the Mera Mixture Chat.
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
| [] | [
"TAGS\n#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null | yolov8 nano for face detection | {} | GDavila/yolov8facedetect | null | [
"region:us"
] | null | 2024-04-17T04:01:09+00:00 | [] | [] | TAGS
#region-us
| yolov8 nano for face detection | [] | [
"TAGS\n#region-us \n"
] |
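Usage sketch for a YOLOv8-nano face detector such as the checkpoint described above (assumptions: the `ultralytics` package is installed, and the weights filename below is a placeholder — substitute whatever `.pt` file the repo actually ships):
```python
from ultralytics import YOLO

# Load the face-detection weights (filename is an assumption; adjust to the repo's file)
model = YOLO("yolov8n-face.pt")

# Run inference on an image and print one bounding box + confidence per detected face
results = model("group_photo.jpg")
for box in results[0].boxes:
    print(box.xyxy[0].tolist(), float(box.conf[0]))
```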
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | vhs01/mistral-7b-dolly | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:02:11+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | PLatonG/openthaigpt-1.0.0-beta-7b-expert-recommendation-2.0 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2024-04-17T04:03:29+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["trl", "sft"]} | kai-oh/mistral-7b-ift-best-v2-hf | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T04:03:37+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# DavidAU/caTUNABeagle-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/caTUNABeagle`](https://huggingface.co/Eric111/caTUNABeagle) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/caTUNABeagle) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/caTUNABeagle-Q6_K-GGUF --model catunabeagle.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/caTUNABeagle-Q6_K-GGUF --model catunabeagle.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m catunabeagle.Q6_K.gguf -n 128
```
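If you prefer to call the converted model from Python rather than the CLI, a minimal sketch using the llama-cpp-python bindings is shown below; the local GGUF filename is assumed to be the Q6_K file downloaded from this repository, and the context size and token limit are arbitrary choices.
```python
# Minimal sketch (assumes `pip install llama-cpp-python` and that the
# Q6_K GGUF file from this repo has been downloaded locally).
from llama_cpp import Llama

llm = Llama(model_path="catunabeagle.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```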
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "fblgit/UNA-TheBeagle-7b-v1", "rishiraj/CatPPT-base", "llama-cpp", "gguf-my-repo"]} | DavidAU/caTUNABeagle-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"fblgit/UNA-TheBeagle-7b-v1",
"rishiraj/CatPPT-base",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:08:14+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #fblgit/UNA-TheBeagle-7b-v1 #rishiraj/CatPPT-base #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/caTUNABeagle-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/caTUNABeagle' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/caTUNABeagle-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/caTUNABeagle' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #fblgit/UNA-TheBeagle-7b-v1 #rishiraj/CatPPT-base #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/caTUNABeagle-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/caTUNABeagle' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Mayoroya-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/Mayoroya`](https://huggingface.co/Eric111/Mayoroya) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Mayoroya) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mayoroya-Q6_K-GGUF --model mayoroya.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mayoroya-Q6_K-GGUF --model mayoroya.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mayoroya.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Eric111/Mayo", "Eric111/Roya", "llama-cpp", "gguf-my-repo"]} | DavidAU/Mayoroya-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Eric111/Mayo",
"Eric111/Roya",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:09:19+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Eric111/Mayo #Eric111/Roya #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/Mayoroya-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/Mayoroya' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mayoroya-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mayoroya' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Eric111/Mayo #Eric111/Roya #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/Mayoroya-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mayoroya' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Mayo-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/Mayo`](https://huggingface.co/Eric111/Mayo) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Mayo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mayo-Q6_K-GGUF --model mayo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mayo-Q6_K-GGUF --model mayo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mayo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "mlabonne/NeuralBeagle14-7B", "openchat/openchat-3.5-0106", "llama-cpp", "gguf-my-repo"]} | DavidAU/Mayo-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"openchat/openchat-3.5-0106",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:10:25+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #mlabonne/NeuralBeagle14-7B #openchat/openchat-3.5-0106 #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/Mayo-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/Mayo' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mayo-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mayo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #mlabonne/NeuralBeagle14-7B #openchat/openchat-3.5-0106 #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/Mayo-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mayo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/MarcoHermes-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/MarcoHermes`](https://huggingface.co/Eric111/MarcoHermes) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/MarcoHermes) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/MarcoHermes-Q6_K-GGUF --model marcohermes.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/MarcoHermes-Q6_K-GGUF --model marcohermes.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m marcohermes.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "AtAndDev/CapybaraMarcoroni-7B", "eren23/DistilHermes-2.5-Mistral-7B", "llama-cpp", "gguf-my-repo"]} | DavidAU/MarcoHermes-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"AtAndDev/CapybaraMarcoroni-7B",
"eren23/DistilHermes-2.5-Mistral-7B",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:11:43+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #AtAndDev/CapybaraMarcoroni-7B #eren23/DistilHermes-2.5-Mistral-7B #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/MarcoHermes-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/MarcoHermes' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/MarcoHermes-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/MarcoHermes' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #AtAndDev/CapybaraMarcoroni-7B #eren23/DistilHermes-2.5-Mistral-7B #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/MarcoHermes-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/MarcoHermes' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7196
- F1 Score: 0.6313
- Accuracy: 0.6333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6749 | 14.29 | 200 | 0.6594 | 0.6084 | 0.6101 |
| 0.6255 | 28.57 | 400 | 0.6720 | 0.6072 | 0.6101 |
| 0.6015 | 42.86 | 600 | 0.6764 | 0.5969 | 0.6032 |
| 0.5788 | 57.14 | 800 | 0.6919 | 0.6095 | 0.6124 |
| 0.5616 | 71.43 | 1000 | 0.6995 | 0.6028 | 0.6104 |
| 0.5483 | 85.71 | 1200 | 0.6893 | 0.6170 | 0.6184 |
| 0.5386 | 100.0 | 1400 | 0.6886 | 0.6205 | 0.6207 |
| 0.5316 | 114.29 | 1600 | 0.6852 | 0.6175 | 0.6173 |
| 0.5234 | 128.57 | 1800 | 0.7024 | 0.6158 | 0.6155 |
| 0.518 | 142.86 | 2000 | 0.7165 | 0.6231 | 0.6247 |
| 0.5102 | 157.14 | 2200 | 0.7304 | 0.6167 | 0.6218 |
| 0.5036 | 171.43 | 2400 | 0.7301 | 0.6204 | 0.6259 |
| 0.4958 | 185.71 | 2600 | 0.7247 | 0.6267 | 0.6276 |
| 0.4915 | 200.0 | 2800 | 0.7179 | 0.6249 | 0.6259 |
| 0.4845 | 214.29 | 3000 | 0.7353 | 0.6344 | 0.6370 |
| 0.4783 | 228.57 | 3200 | 0.7213 | 0.6297 | 0.6296 |
| 0.4723 | 242.86 | 3400 | 0.7260 | 0.6342 | 0.6368 |
| 0.4663 | 257.14 | 3600 | 0.7465 | 0.6292 | 0.6327 |
| 0.4598 | 271.43 | 3800 | 0.7543 | 0.6333 | 0.6342 |
| 0.454 | 285.71 | 4000 | 0.7691 | 0.6337 | 0.6365 |
| 0.4461 | 300.0 | 4200 | 0.7411 | 0.6293 | 0.6293 |
| 0.442 | 314.29 | 4400 | 0.7787 | 0.6264 | 0.6279 |
| 0.4358 | 328.57 | 4600 | 0.7773 | 0.6284 | 0.6316 |
| 0.4322 | 342.86 | 4800 | 0.7750 | 0.6241 | 0.6287 |
| 0.4251 | 357.14 | 5000 | 0.7859 | 0.6260 | 0.6290 |
| 0.4213 | 371.43 | 5200 | 0.8191 | 0.6295 | 0.6319 |
| 0.4152 | 385.71 | 5400 | 0.7943 | 0.6249 | 0.6273 |
| 0.4106 | 400.0 | 5600 | 0.7933 | 0.6276 | 0.6293 |
| 0.4072 | 414.29 | 5800 | 0.8317 | 0.6235 | 0.6241 |
| 0.4027 | 428.57 | 6000 | 0.8035 | 0.6268 | 0.6276 |
| 0.3995 | 442.86 | 6200 | 0.8059 | 0.6245 | 0.6261 |
| 0.3955 | 457.14 | 6400 | 0.8212 | 0.6260 | 0.6273 |
| 0.3922 | 471.43 | 6600 | 0.8071 | 0.6238 | 0.6247 |
| 0.3894 | 485.71 | 6800 | 0.8409 | 0.6251 | 0.6276 |
| 0.3867 | 500.0 | 7000 | 0.8482 | 0.6189 | 0.6196 |
| 0.3851 | 514.29 | 7200 | 0.8274 | 0.6199 | 0.6210 |
| 0.383 | 528.57 | 7400 | 0.8286 | 0.6211 | 0.6236 |
| 0.3787 | 542.86 | 7600 | 0.8477 | 0.6235 | 0.6253 |
| 0.3789 | 557.14 | 7800 | 0.8196 | 0.6253 | 0.6259 |
| 0.3763 | 571.43 | 8000 | 0.8285 | 0.6200 | 0.6210 |
| 0.3744 | 585.71 | 8200 | 0.8376 | 0.6222 | 0.6239 |
| 0.3715 | 600.0 | 8400 | 0.8462 | 0.6231 | 0.6247 |
| 0.3677 | 614.29 | 8600 | 0.8558 | 0.6202 | 0.6218 |
| 0.3692 | 628.57 | 8800 | 0.8468 | 0.6226 | 0.6244 |
| 0.3691 | 642.86 | 9000 | 0.8440 | 0.6214 | 0.6230 |
| 0.3659 | 657.14 | 9200 | 0.8636 | 0.6238 | 0.6261 |
| 0.366 | 671.43 | 9400 | 0.8386 | 0.6216 | 0.6230 |
| 0.3659 | 685.71 | 9600 | 0.8443 | 0.6214 | 0.6227 |
| 0.3643 | 700.0 | 9800 | 0.8483 | 0.6233 | 0.6247 |
| 0.3642 | 714.29 | 10000 | 0.8486 | 0.6219 | 0.6233 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T04:11:55+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_EMP\_H3K36me3-seqsight\_65536\_512\_47M-L32\_all
=====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7196
* F1 Score: 0.6313
* Accuracy: 0.6333
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | null |
## Exllama v2 Quantizations of CodeQwen1.5-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.
Original model: https://huggingface.co/Qwen/CodeQwen1.5-7B
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Available sizes
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/CodeQwen1.5-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/CodeQwen1.5-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/CodeQwen1.5-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/CodeQwen1.5-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/CodeQwen1.5-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/CodeQwen1.5-7B-exl2 CodeQwen1.5-7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch:
Linux:
```shell
huggingface-cli download bartowski/CodeQwen1.5-7B-exl2 --revision 6_5 --local-dir CodeQwen1.5-7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
huggingface-cli download bartowski/CodeQwen1.5-7B-exl2 --revision 6_5 --local-dir CodeQwen1.5-7B-exl2-6.5 --local-dir-use-symlinks False
```
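The same branch can also be fetched from Python; a hedged sketch using `huggingface_hub.snapshot_download` follows (the local directory name is an arbitrary choice):
```python
# Hedged sketch: download the 6.5 bpw branch via the huggingface_hub Python API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/CodeQwen1.5-7B-exl2",
    revision="6_5",                        # branch name from the table above
    local_dir="CodeQwen1.5-7B-exl2-6_5",   # arbitrary local folder
)
```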
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski | {"language": ["en"], "license": "other", "tags": ["pretrained"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"} | bartowski/CodeQwen1.5-7B-exl2 | null | [
"pretrained",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-04-17T04:14:39+00:00 | [] | [
"en"
] | TAGS
#pretrained #text-generation #en #license-other #region-us
| Exllama v2 Quantizations of CodeQwen1.5-7B
------------------------------------------
Using <a href="URL ExLlamaV2 v0.0.18 for quantization.
**The "main" branch only contains the URL, download one of the other branches for the model (see below)**
Each branch contains an individual bits-per-weight quantization, with the main one containing only the URL for further conversions.
Original model: URL
Prompt format
-------------
Available sizes
---------------
Download instructions
---------------------
With git:
With huggingface hub (credit to TheBloke for instructions):
To download a specific branch, use the '--revision' parameter. For example, to download the 6.5 bpw branch:
Linux:
Windows (which apparently doesn't like \_ in folders sometimes?):
Want to support my work? Visit my ko-fi page here: URL
| [] | [
"TAGS\n#pretrained #text-generation #en #license-other #region-us \n"
] |
null | null | A simple NumPy Python script for the CMD console that builds a 2-input, 3-layer neural network (two input-level neurons, two hidden-layer neurons, one output-level neuron) to illustrate how an
AI model can perform the XOR operation.
This script initializes a tiny neural network with random weights and trains it using the backpropagation algorithm. After training,
the network should be able to correctly perform the XOR operation on the 2 inputs. The key to solving the XOR problem with
a neural network is to have a non-linear activation function, like the sigmoid function used here, and a hidden layer that can create
the necessary non-linear decision boundaries.
This script illustrates how an AI model can perform the logical XOR operation, using a minimal neural network with a single hidden layer
containing two neurons.
An adaptive learning rate is used to refine the loss.
The script produces a working XOR with a loss under 1% for all inputs.
However, the output is never exactly 1.0 or 0.0, as it would be for a true boolean XOR gate.
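A minimal NumPy sketch of such a 2-2-1 sigmoid network is shown below. It is not the author's original script; the weight-initialization range, the loss-based learning-rate schedule, and the epoch count are illustrative assumptions, and a different random seed may need more epochs to converge.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(y):
    # derivative of the sigmoid expressed in terms of its output y
    return y * (1.0 - y)

# XOR truth table: 4 samples, 2 inputs, 1 target each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.uniform(-1.0, 1.0, size=(2, 2))   # input -> hidden weights
b1 = np.zeros((1, 2))
W2 = rng.uniform(-1.0, 1.0, size=(2, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 0.5
for epoch in range(20000):
    # forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    loss = np.mean((y - output) ** 2)       # mean squared error

    # backpropagation
    d_output = (output - y) * sigmoid_deriv(output)
    d_hidden = (d_output @ W2.T) * sigmoid_deriv(hidden)
    W2 -= lr * (hidden.T @ d_output)
    b2 -= lr * d_output.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_hidden)
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

    # crude adaptive learning rate: shrink the step as training progresses
    if epoch > 0 and epoch % 5000 == 0:
        lr *= 0.8

print("final loss:", loss)
print("predictions:", output.round(3).ravel())  # close to, but never exactly, 0 or 1
```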
---
license: mit
---
| {} | MartialTerran/2-input-XOR_by_3_level_NN_with_Sigmoid | null | [
"region:us"
] | null | 2024-04-17T04:14:48+00:00 | [] | [] | TAGS
#region-us
 | A simple NumPy Python script for the CMD console that builds a 2-input, 3-layer neural network (two input-level neurons, two hidden-layer neurons, one output-level neuron) to illustrate how an
AI model can perform the XOR operation.
This script initializes a tiny neural network with random weights and trains it using the backpropagation algorithm. After training,
the network should be able to correctly perform the XOR operation on the 2 inputs. The key to solving the XOR problem with
a neural network is to have a non-linear activation function, like the sigmoid function used here, and a hidden layer that can create
the necessary non-linear decision boundaries.
This script illustrates how an AI model can perform the logical XOR operation, using a minimal neural network with a single hidden layer
containing two neurons.
An adaptive learning rate is used to refine the loss.
The script produces a working XOR with a loss under 1% for all inputs.
However, the output is never exactly 1.0 or 0.0, as it would be for a true boolean XOR gate.
---
license: mit
---
| [] | [
"TAGS\n#region-us \n"
] |
null | transformers |
# DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/UltraCatunaMayo-DPO`](https://huggingface.co/Eric111/UltraCatunaMayo-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/UltraCatunaMayo-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF --model ultracatunamayo-dpo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF --model ultracatunamayo-dpo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ultracatunamayo-dpo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:15:37+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/UltraCatunaMayo-DPO' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/UltraCatunaMayo-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/UltraCatunaMayo-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/UltraCatunaMayo-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | adediu25/implicit-bert-all-no-lora | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:17:30+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-large-10K-summarization
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
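As a rough illustration, these settings correspond to a transformers `Seq2SeqTrainingArguments` configuration along the following lines (a hedged sketch; the output directory is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="T5-large-10K-summarization",  # placeholder, not taken from this card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```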
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-large", "model-index": [{"name": "T5-large-10K-summarization", "results": []}]} | yatharth97/T5-large-10K-summarization | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T04:18:58+00:00 | [] | [] | TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# T5-large-10K-summarization
This model is a fine-tuned version of t5-large on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Tokenizers 0.15.2
| [
"# T5-large-10K-summarization\n\nThis model is a fine-tuned version of t5-large on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-large #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# T5-large-10K-summarization\n\nThis model is a fine-tuned version of t5-large on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Tokenizers 0.15.2"
] |
null | null |
# DavidAU/UltraCatunaMayo-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/UltraCatunaMayo`](https://huggingface.co/Eric111/UltraCatunaMayo) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/UltraCatunaMayo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/UltraCatunaMayo-Q6_K-GGUF --model ultracatunamayo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/UltraCatunaMayo-Q6_K-GGUF --model ultracatunamayo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ultracatunamayo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "mlabonne/UltraMerge-7B", "Eric111/CatunaMayo", "llama-cpp", "gguf-my-repo"]} | DavidAU/UltraCatunaMayo-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/UltraMerge-7B",
"Eric111/CatunaMayo",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:20:23+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #mlabonne/UltraMerge-7B #Eric111/CatunaMayo #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/UltraCatunaMayo-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/UltraCatunaMayo' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/UltraCatunaMayo-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/UltraCatunaMayo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #mlabonne/UltraMerge-7B #Eric111/CatunaMayo #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/UltraCatunaMayo-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/UltraCatunaMayo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb-spoiler-distilbertOrigDatasetSampledOnly
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1065
- Accuracy: 0.6849
- Recall: 0.6615
- Precision: 0.6939
- F1: 0.6773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5154 | 0.12 | 500 | 0.6116 | 0.6853 | 0.8147 | 0.6471 | 0.7213 |
| 0.4616 | 0.25 | 1000 | 0.6352 | 0.6946 | 0.7063 | 0.6902 | 0.6981 |
| 0.4422 | 0.38 | 1500 | 0.7289 | 0.69 | 0.7265 | 0.6771 | 0.7009 |
| 0.4228 | 0.5 | 2000 | 0.7194 | 0.6957 | 0.6575 | 0.7120 | 0.6836 |
| 0.4464 | 0.62 | 2500 | 0.6603 | 0.6926 | 0.6757 | 0.6994 | 0.6873 |
| 0.4142 | 0.75 | 3000 | 0.6885 | 0.6813 | 0.726 | 0.6664 | 0.6949 |
| 0.4273 | 0.88 | 3500 | 0.6638 | 0.69 | 0.7328 | 0.6750 | 0.7027 |
| 0.5912 | 1.0 | 4000 | 0.5640 | 0.7025 | 0.7113 | 0.6990 | 0.7051 |
| 0.4345 | 1.12 | 4500 | 0.7228 | 0.6949 | 0.6435 | 0.7172 | 0.6784 |
| 0.4336 | 1.25 | 5000 | 0.6732 | 0.6911 | 0.5915 | 0.7387 | 0.6569 |
| 0.4289 | 1.38 | 5500 | 0.6717 | 0.694 | 0.662 | 0.7073 | 0.6839 |
| 0.4219 | 1.5 | 6000 | 0.6760 | 0.6834 | 0.688 | 0.6817 | 0.6848 |
| 0.4175 | 1.62 | 6500 | 0.7393 | 0.6897 | 0.6757 | 0.6952 | 0.6853 |
| 0.4171 | 1.75 | 7000 | 0.7033 | 0.6797 | 0.597 | 0.7154 | 0.6509 |
| 0.4345 | 1.88 | 7500 | 0.6748 | 0.6874 | 0.6505 | 0.7023 | 0.6754 |
| 0.4063 | 2.0 | 8000 | 0.7267 | 0.6913 | 0.6098 | 0.7285 | 0.6639 |
| 0.3102 | 2.12 | 8500 | 1.0369 | 0.684 | 0.6308 | 0.7059 | 0.6662 |
| 0.3338 | 2.25 | 9000 | 1.0451 | 0.6846 | 0.6773 | 0.6874 | 0.6823 |
| 0.3257 | 2.38 | 9500 | 1.0364 | 0.682 | 0.6322 | 0.7021 | 0.6654 |
| 0.3235 | 2.5 | 10000 | 1.0224 | 0.6823 | 0.6315 | 0.7028 | 0.6653 |
| 0.3171 | 2.62 | 10500 | 1.1165 | 0.6859 | 0.6368 | 0.7061 | 0.6696 |
| 0.3266 | 2.75 | 11000 | 1.1109 | 0.6834 | 0.6315 | 0.7046 | 0.6661 |
| 0.2914 | 2.88 | 11500 | 1.1022 | 0.6829 | 0.6488 | 0.6963 | 0.6717 |
| 0.3041 | 3.0 | 12000 | 1.1065 | 0.6849 | 0.6615 | 0.6939 | 0.6773 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "recall", "precision", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "imdb-spoiler-distilbertOrigDatasetSampledOnly", "results": []}]} | Zritze/imdb-spoiler-distilbertOrigDatasetSampledOnly | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:21:01+00:00 | [] | [] | TAGS
#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| imdb-spoiler-distilbertOrigDatasetSampledOnly
=============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1065
* Accuracy: 0.6849
* Recall: 0.6615
* Precision: 0.6939
* F1: 0.6773
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #roberta #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | acram/gemma-pii-detection-Instruct-Finetune-test | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-17T04:27:20+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | transformers |
# DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/CatunaLaserPi-DPO`](https://huggingface.co/Eric111/CatunaLaserPi-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/CatunaLaserPi-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF --model catunalaserpi-dpo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF --model catunalaserpi-dpo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m catunalaserpi-dpo.Q6_K.gguf -n 128
```
| {"license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:29:37+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/CatunaLaserPi-DPO' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/CatunaLaserPi-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# DavidAU/CatunaLaserPi-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/CatunaLaserPi-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1`](https://huggingface.co/Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF --model mistral-7b-instruct_v0.2_una-thebeagle-7b-v1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF --model mistral-7b-instruct_v0.2_una-thebeagle-7b-v1.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-instruct_v0.2_una-thebeagle-7b-v1.Q6_K.gguf -n 128
```
| {"license": "cc-by-nc-nd-4.0", "library_name": "transformers", "tags": ["mergekit", "merge", "llama-cpp", "gguf-my-repo"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.2", "fblgit/UNA-TheBeagle-7b-v1"]} | DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:fblgit/UNA-TheBeagle-7b-v1",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:30:42+00:00 | [] | [] | TAGS
#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-fblgit/UNA-TheBeagle-7b-v1 #license-cc-by-nc-nd-4.0 #endpoints_compatible #region-us
|
# DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #mergekit #merge #llama-cpp #gguf-my-repo #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-fblgit/UNA-TheBeagle-7b-v1 #license-cc-by-nc-nd-4.0 #endpoints_compatible #region-us \n",
"# DavidAU/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mistral-7B-Instruct_v0.2_UNA-TheBeagle-7b-v1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B`](https://huggingface.co/Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF --model mistinst-v0.2_ochat-3.5-0106_dpo-binarized-neuraltrix-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF --model mistinst-v0.2_ochat-3.5-0106_dpo-binarized-neuraltrix-7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistinst-v0.2_ochat-3.5-0106_dpo-binarized-neuraltrix-7b.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106", "eren23/dpo-binarized-NeuralTrix-7B", "llama-cpp", "gguf-my-repo"]} | DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106",
"eren23/dpo-binarized-NeuralTrix-7B",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:32:31+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106 #eren23/dpo-binarized-NeuralTrix-7B #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106 #eren23/dpo-binarized-NeuralTrix-7B #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/MistInst-v0.2_ochat-3.5-0106_dpo-binarized-NeuralTrix-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106`](https://huggingface.co/Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF --model mistral-7b-instruct-v0.2_openchat-3.5-0106.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF --model mistral-7b-instruct-v0.2_openchat-3.5-0106.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-instruct-v0.2_openchat-3.5-0106.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "openchat/openchat-3.5-0106", "llama-cpp", "gguf-my-repo"]} | DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"openchat/openchat-3.5-0106",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:33:38+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #openchat/openchat-3.5-0106 #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #mistralai/Mistral-7B-Instruct-v0.2 #openchat/openchat-3.5-0106 #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/Mistral-7B-Instruct-v0.2_openchat-3.5-0106-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Mistral-7B-Instruct-v0.2_openchat-3.5-0106' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/CatunaLaserPi-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/CatunaLaserPi`](https://huggingface.co/Eric111/CatunaLaserPi) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/CatunaLaserPi) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/CatunaLaserPi-Q6_K-GGUF --model catunalaserpi.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/CatunaLaserPi-Q6_K-GGUF --model catunalaserpi.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m catunalaserpi.Q6_K.gguf -n 128
```
| {"license": "cc-by-nc-4.0", "tags": ["merge", "mergekit", "lazymergekit", "Eric111/caTUNABeagle", "BryanSwk/LaserPipe-7B-SLERP", "llama-cpp", "gguf-my-repo"]} | DavidAU/CatunaLaserPi-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Eric111/caTUNABeagle",
"BryanSwk/LaserPipe-7B-SLERP",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-17T04:34:43+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Eric111/caTUNABeagle #BryanSwk/LaserPipe-7B-SLERP #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us
|
# DavidAU/CatunaLaserPi-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/CatunaLaserPi' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/CatunaLaserPi-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/CatunaLaserPi' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Eric111/caTUNABeagle #BryanSwk/LaserPipe-7B-SLERP #llama-cpp #gguf-my-repo #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/CatunaLaserPi-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/CatunaLaserPi' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser`](https://huggingface.co/Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF --model snorkel-mistral-pairrm-dpo-openchat-3.5-0106-laser.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF --model snorkel-mistral-pairrm-dpo-openchat-3.5-0106-laser.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m snorkel-mistral-pairrm-dpo-openchat-3.5-0106-laser.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "snorkelai/Snorkel-Mistral-PairRM-DPO", "cognitivecomputations/openchat-3.5-0106-laser", "llama-cpp", "gguf-my-repo"]} | DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"snorkelai/Snorkel-Mistral-PairRM-DPO",
"cognitivecomputations/openchat-3.5-0106-laser",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:36:05+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #snorkelai/Snorkel-Mistral-PairRM-DPO #cognitivecomputations/openchat-3.5-0106-laser #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #snorkelai/Snorkel-Mistral-PairRM-DPO #cognitivecomputations/openchat-3.5-0106-laser #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/Yarn-Mistral-7b-128k-DPO`](https://huggingface.co/Eric111/Yarn-Mistral-7b-128k-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/Yarn-Mistral-7b-128k-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF --model yarn-mistral-7b-128k-dpo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF --model yarn-mistral-7b-128k-dpo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yarn-mistral-7b-128k-dpo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:37:18+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/Yarn-Mistral-7b-128k-DPO' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Yarn-Mistral-7b-128k-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/Yarn-Mistral-7b-128k-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/Yarn-Mistral-7b-128k-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/openchat-3.5-0106-128k-DPO`](https://huggingface.co/Eric111/openchat-3.5-0106-128k-DPO) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/openchat-3.5-0106-128k-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF --model openchat-3.5-0106-128k-dpo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF --model openchat-3.5-0106-128k-dpo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openchat-3.5-0106-128k-dpo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"]} | DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF | null | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:40:29+00:00 | [] | [] | TAGS
#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/openchat-3.5-0106-128k-DPO' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/openchat-3.5-0106-128k-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/openchat-3.5-0106-128k-DPO-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/openchat-3.5-0106-128k-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | null |
# DavidAU/CatunaMayo-Q6_K-GGUF
This model was converted to GGUF format from [`Eric111/CatunaMayo`](https://huggingface.co/Eric111/CatunaMayo) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Eric111/CatunaMayo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/CatunaMayo-Q6_K-GGUF --model catunamayo.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/CatunaMayo-Q6_K-GGUF --model catunamayo.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m catunamayo.Q6_K.gguf -n 128
```
| {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "Eric111/caTUNABeagle", "Eric111/AlphaMayo", "llama-cpp", "gguf-my-repo"]} | DavidAU/CatunaMayo-Q6_K-GGUF | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Eric111/caTUNABeagle",
"Eric111/AlphaMayo",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"region:us"
] | null | 2024-04-17T04:41:35+00:00 | [] | [] | TAGS
#gguf #merge #mergekit #lazymergekit #Eric111/caTUNABeagle #Eric111/AlphaMayo #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us
|
# DavidAU/CatunaMayo-Q6_K-GGUF
This model was converted to GGUF format from 'Eric111/CatunaMayo' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
| [
"# DavidAU/CatunaMayo-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/CatunaMayo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] | [
"TAGS\n#gguf #merge #mergekit #lazymergekit #Eric111/caTUNABeagle #Eric111/AlphaMayo #llama-cpp #gguf-my-repo #license-apache-2.0 #region-us \n",
"# DavidAU/CatunaMayo-Q6_K-GGUF\nThis model was converted to GGUF format from 'Eric111/CatunaMayo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null | transformers |
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
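One possible way to run this checkpoint for inference is sketched below; it is an assumption rather than the author's published code, and it presumes the repo contains a standard PEFT LoRA adapter trained on top of the 4-bit base model named above.

```python
# Hedged sketch (not from the original card): assumes this repo holds a PEFT
# LoRA adapter on top of unsloth/mistral-7b-bnb-4bit.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "codesagar/prompt-guard-reasoning-v12",
    torch_dtype=torch.float16,
    load_in_4bit=True,  # keep the base weights quantized, as during training
)
tokenizer = AutoTokenizer.from_pretrained("codesagar/prompt-guard-reasoning-v12")

inputs = tokenizer("Classify this prompt for injection risk:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```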
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | codesagar/prompt-guard-reasoning-v12 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-17T04:42:47+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_65536_512_47M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6470
- F1 Score: 0.5617
- Accuracy: 0.5617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
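The training script itself is not included in the card; purely as an illustration, the hyperparameters listed above could be expressed with the `transformers` Trainer roughly as follows (the argument mapping is an assumption, not the authors' actual configuration):

```python
# Illustrative sketch only -- the card lists hyperparameters but not the script.
# Argument names below are assumptions mapped from that list.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_mouse_0-seqsight_65536_512_47M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```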
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6443 | 50.0 | 200 | 0.7420 | 0.5861 | 0.5864 |
| 0.4985 | 100.0 | 400 | 0.8800 | 0.5566 | 0.5568 |
| 0.391 | 150.0 | 600 | 0.9882 | 0.5716 | 0.5728 |
| 0.3329 | 200.0 | 800 | 1.0680 | 0.5608 | 0.5630 |
| 0.3003 | 250.0 | 1000 | 1.0885 | 0.5851 | 0.5852 |
| 0.2844 | 300.0 | 1200 | 1.1914 | 0.5911 | 0.5914 |
| 0.2693 | 350.0 | 1400 | 1.1644 | 0.5863 | 0.5889 |
| 0.2598 | 400.0 | 1600 | 1.1619 | 0.5876 | 0.5889 |
| 0.2487 | 450.0 | 1800 | 1.2034 | 0.5877 | 0.5877 |
| 0.2383 | 500.0 | 2000 | 1.2792 | 0.6049 | 0.6049 |
| 0.2317 | 550.0 | 2200 | 1.2357 | 0.6024 | 0.6025 |
| 0.2208 | 600.0 | 2400 | 1.3531 | 0.5919 | 0.5951 |
| 0.2116 | 650.0 | 2600 | 1.3232 | 0.5924 | 0.5938 |
| 0.2025 | 700.0 | 2800 | 1.3744 | 0.6062 | 0.6062 |
| 0.1981 | 750.0 | 3000 | 1.3268 | 0.5911 | 0.5914 |
| 0.1893 | 800.0 | 3200 | 1.3673 | 0.5923 | 0.5926 |
| 0.1832 | 850.0 | 3400 | 1.3710 | 0.5985 | 0.5988 |
| 0.1769 | 900.0 | 3600 | 1.3232 | 0.5940 | 0.5951 |
| 0.1679 | 950.0 | 3800 | 1.4335 | 0.6012 | 0.6025 |
| 0.1613 | 1000.0 | 4000 | 1.4186 | 0.5959 | 0.5963 |
| 0.156 | 1050.0 | 4200 | 1.4299 | 0.5984 | 0.5988 |
| 0.1517 | 1100.0 | 4400 | 1.4396 | 0.5938 | 0.5951 |
| 0.1471 | 1150.0 | 4600 | 1.4829 | 0.6043 | 0.6049 |
| 0.1395 | 1200.0 | 4800 | 1.5019 | 0.6094 | 0.6099 |
| 0.1361 | 1250.0 | 5000 | 1.3642 | 0.6110 | 0.6111 |
| 0.1329 | 1300.0 | 5200 | 1.4592 | 0.5941 | 0.5951 |
| 0.1288 | 1350.0 | 5400 | 1.5022 | 0.6094 | 0.6099 |
| 0.1249 | 1400.0 | 5600 | 1.4542 | 0.6024 | 0.6025 |
| 0.1176 | 1450.0 | 5800 | 1.5842 | 0.6012 | 0.6012 |
| 0.1148 | 1500.0 | 6000 | 1.5441 | 0.6048 | 0.6049 |
| 0.1137 | 1550.0 | 6200 | 1.5358 | 0.6099 | 0.6099 |
| 0.1109 | 1600.0 | 6400 | 1.5550 | 0.6071 | 0.6074 |
| 0.1053 | 1650.0 | 6600 | 1.5509 | 0.6087 | 0.6086 |
| 0.1027 | 1700.0 | 6800 | 1.5171 | 0.6046 | 0.6049 |
| 0.1 | 1750.0 | 7000 | 1.5449 | 0.6012 | 0.6012 |
| 0.0976 | 1800.0 | 7200 | 1.5314 | 0.6038 | 0.6037 |
| 0.0948 | 1850.0 | 7400 | 1.5012 | 0.6207 | 0.6210 |
| 0.0936 | 1900.0 | 7600 | 1.6573 | 0.6063 | 0.6074 |
| 0.0907 | 1950.0 | 7800 | 1.5893 | 0.6010 | 0.6025 |
| 0.091 | 2000.0 | 8000 | 1.4911 | 0.6108 | 0.6111 |
| 0.0894 | 2050.0 | 8200 | 1.6058 | 0.6073 | 0.6074 |
| 0.0872 | 2100.0 | 8400 | 1.6656 | 0.6055 | 0.6062 |
| 0.0866 | 2150.0 | 8600 | 1.6268 | 0.6104 | 0.6111 |
| 0.0833 | 2200.0 | 8800 | 1.6478 | 0.6001 | 0.6 |
| 0.084 | 2250.0 | 9000 | 1.5717 | 0.6040 | 0.6049 |
| 0.0839 | 2300.0 | 9200 | 1.6142 | 0.6046 | 0.6049 |
| 0.0807 | 2350.0 | 9400 | 1.6460 | 0.6049 | 0.6049 |
| 0.0809 | 2400.0 | 9600 | 1.6330 | 0.6037 | 0.6037 |
| 0.0796 | 2450.0 | 9800 | 1.6165 | 0.6098 | 0.6099 |
| 0.08 | 2500.0 | 10000 | 1.6272 | 0.6086 | 0.6086 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L32_all", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L32_all | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-04-17T04:43:46+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us
| GUE\_mouse\_0-seqsight\_65536\_512\_47M-L32\_all
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_65536\_512\_47M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6470
* F1 Score: 0.5617
* Accuracy: 0.5617
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_65536_512_47M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft | ## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
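These fields map one-to-one onto a `transformers` `BitsAndBytesConfig`; as a hedged illustration (not code from the original card), the same quantization setup could be declared as:

```python
# Hedged illustration: the config above expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```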
### Framework versions
- PEFT 0.5.0
| {"library_name": "peft"} | Britania/AthelticRoutinesLLM | null | [
"peft",
"region:us"
] | null | 2024-04-17T04:43:55+00:00 | [] | [] | TAGS
#peft #region-us
| ## Training procedure
The following 'bitsandbytes' quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
| [
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.5.0"
] | [
"TAGS\n#peft #region-us \n",
"## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: bfloat16",
"### Framework versions\n\n\n- PEFT 0.5.0"
] |