pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths, 1-900k) | metadata (stringlengths, 2-438k) | id (stringlengths, 5-122) | last_modified (null) | tags (listlengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25) | arxiv (listlengths, 0-201) | languages (listlengths, 0-1.83k) | tags_str (stringlengths, 17-9.34k) | text_str (stringlengths, 0-389k) | text_lists (listlengths, 0-722) | processed_texts (listlengths, 1-723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null |
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF
Quantized from fp32 with love.
* Quantizations made possible using the mixtral-8x7b.imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow)).
For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747).
<i>All quants are verified working prior to uploading to the repo for your safety and convenience.</i>
Importance matrix quantizations are a work in progress; IQ3 and above are recommended for best results.
<b>Tip:</b> Pick a size that fits in your GPU while still leaving some room for context for best speed. You may need to pad this further depending on whether you are also running image generation or TTS.
Original model card can be found [here](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss)
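To see which quant sizes are actually available before downloading, here is a minimal sketch using `huggingface_hub` (not part of the original card); the file names printed depend on what the repo currently contains.

```python
from huggingface_hub import HfApi

api = HfApi()
repo_id = "InferenceIllusionist/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF"
# List repo files and keep only the GGUF quants, so you can pick one that
# fits your VRAM budget (see the tip above).
for name in sorted(api.list_repo_files(repo_id)):
    if name.endswith(".gguf"):
        print(name)
```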
|
{"license": "apache-2.0", "tags": ["merge", "gguf", "iMat"]}
|
InferenceIllusionist/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF
| null |
[
"gguf",
"merge",
"iMat",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T01:49:55+00:00
|
[] |
[] |
TAGS
#gguf #merge #iMat #license-apache-2.0 #region-us
|
<img src="URL" width="400"/>
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF
Quantized from fp32 with love.
* Quantizations made possible using mixtral-8x7b.imatrix file from this repo (special thanks to ikawrakow).
For a brief rundown of iMatrix quant performance please see this PR
<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>
Importance matrix quantizations are a work in progress, IQ3 and above is recommended for best results.
<b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context for best speed. You may need to pad this further depending on if you are running image gen or TTS as well.
Original model card can be found here
|
[
"# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF\n\nQuantized from fp32 with love.\n* Quantizations made possible using mixtral-8x7b.imatrix file from this repo (special thanks to ikawrakow).\n\nFor a brief rundown of iMatrix quant performance please see this PR\n\n<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>\n\nImportance matrix quantizations are a work in progress, IQ3 and above is recommended for best results. \n\n<b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context for best speed. You may need to pad this further depending on if you are running image gen or TTS as well.\n\nOriginal model card can be found here"
] |
[
"TAGS\n#gguf #merge #iMat #license-apache-2.0 #region-us \n",
"# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-iMat-GGUF\n\nQuantized from fp32 with love.\n* Quantizations made possible using mixtral-8x7b.imatrix file from this repo (special thanks to ikawrakow).\n\nFor a brief rundown of iMatrix quant performance please see this PR\n\n<i>All quants are verified working prior to uploading to repo for your safety and convenience. </i>\n\nImportance matrix quantizations are a work in progress, IQ3 and above is recommended for best results. \n\n<b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context for best speed. You may need to pad this further depending on if you are running image gen or TTS as well.\n\nOriginal model card can be found here"
] |
text-generation
| null |
# DavidAU/winter-garden-7b-delta-Q6_K-GGUF
This model was converted to GGUF format from [`maldv/winter-garden-7b-delta`](https://huggingface.co/maldv/winter-garden-7b-delta) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maldv/winter-garden-7b-delta) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/winter-garden-7b-delta-Q6_K-GGUF --model winter-garden-7b-delta.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/winter-garden-7b-delta-Q6_K-GGUF --model winter-garden-7b-delta.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m winter-garden-7b-delta.Q6_K.gguf -n 128
```
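As an alternative to the CLI above, the following is a rough sketch using the third-party `llama-cpp-python` bindings (not mentioned in the original card); it assumes the Q6_K file has already been downloaded to the working directory.

```python
# Rough sketch with the llama-cpp-python bindings (pip install llama-cpp-python).
# Assumes winter-garden-7b-delta.Q6_K.gguf is already in the current directory.
from llama_cpp import Llama

llm = Llama(model_path="winter-garden-7b-delta.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```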
|
{"license": "cc-by-nc-4.0", "tags": ["merge", "conversational", "multi-task", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"}
|
DavidAU/winter-garden-7b-delta-Q6_K-GGUF
| null |
[
"gguf",
"merge",
"conversational",
"multi-task",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2024-04-16T01:50:13+00:00
|
[] |
[] |
TAGS
#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #license-cc-by-nc-4.0 #region-us
|
# DavidAU/winter-garden-7b-delta-Q6_K-GGUF
This model was converted to GGUF format from 'maldv/winter-garden-7b-delta' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/winter-garden-7b-delta-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/winter-garden-7b-delta' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/winter-garden-7b-delta-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/winter-garden-7b-delta' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [GamblerOnTrain/danke30a](https://huggingface.co/GamblerOnTrain/danke30a)
* [GamblerOnTrain/danke20a](https://huggingface.co/GamblerOnTrain/danke20a)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: GamblerOnTrain/danke20a
layer_range: [0, 24]
- model: GamblerOnTrain/danke30a
layer_range: [0, 24]
merge_method: slerp
base_model: GamblerOnTrain/danke30a
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
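As an illustrative check of the result (not part of the original card), the merged model could be loaded with `transformers` in the same bfloat16 dtype used in the config above:

```python
# Illustrative only: load the merged checkpoint in the bfloat16 dtype from
# the merge config above and generate a few tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sumail/Ame5")
model = AutoModelForCausalLM.from_pretrained("Sumail/Ame5", torch_dtype=torch.bfloat16)
inputs = tokenizer("Hello,", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```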
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["GamblerOnTrain/danke30a", "GamblerOnTrain/danke20a"]}
|
Sumail/Ame5
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:GamblerOnTrain/danke30a",
"base_model:GamblerOnTrain/danke20a",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T01:50:18+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-GamblerOnTrain/danke30a #base_model-GamblerOnTrain/danke20a #autotrain_compatible #endpoints_compatible #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* GamblerOnTrain/danke30a
* GamblerOnTrain/danke20a
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* GamblerOnTrain/danke30a\n* GamblerOnTrain/danke20a",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #mergekit #merge #conversational #base_model-GamblerOnTrain/danke30a #base_model-GamblerOnTrain/danke20a #autotrain_compatible #endpoints_compatible #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* GamblerOnTrain/danke30a\n* GamblerOnTrain/danke20a",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
null |
transformers
|
# DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF
This model was converted to GGUF format from [`maldv/electric-sheep-7b-alpha`](https://huggingface.co/maldv/electric-sheep-7b-alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maldv/electric-sheep-7b-alpha) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF --model electric-sheep-7b-alpha.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF --model electric-sheep-7b-alpha.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m electric-sheep-7b-alpha.Q6_K.gguf -n 128
```
|
{"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "llama-cpp", "gguf-my-repo"], "datasets": ["maldv/cyberpunk", "microsoft/orca-math-word-problems-200k", "Weyaxi/sci-datasets", "maldv/conversation-cixot"], "base_model": "maldv/winter-garden-7b-alpha"}
|
DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:maldv/cyberpunk",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Weyaxi/sci-datasets",
"dataset:maldv/conversation-cixot",
"base_model:maldv/winter-garden-7b-alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T01:53:16+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #llama-cpp #gguf-my-repo #en #dataset-maldv/cyberpunk #dataset-microsoft/orca-math-word-problems-200k #dataset-Weyaxi/sci-datasets #dataset-maldv/conversation-cixot #base_model-maldv/winter-garden-7b-alpha #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
# DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF
This model was converted to GGUF format from 'maldv/electric-sheep-7b-alpha' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/electric-sheep-7b-alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #llama-cpp #gguf-my-repo #en #dataset-maldv/cyberpunk #dataset-microsoft/orca-math-word-problems-200k #dataset-Weyaxi/sci-datasets #dataset-maldv/conversation-cixot #base_model-maldv/winter-garden-7b-alpha #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# DavidAU/electric-sheep-7b-alpha-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/electric-sheep-7b-alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/uncensorie/chronob-1.4-lin-70b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/chronob-1.4-lin-70b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
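As a minimal sketch of the multi-part case (assuming the two Q6_K parts listed below have already been downloaded), the split parts are simply concatenated in order:

```python
# The split parts are concatenated in byte order to rebuild the full GGUF.
# Assumes both Q6_K parts from the table below are already downloaded.
import shutil

parts = [
    "chronob-1.4-lin-70b.Q6_K.gguf.part1of2",
    "chronob-1.4-lin-70b.Q6_K.gguf.part2of2",
]
with open("chronob-1.4-lin-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```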
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/chronob-1.4-lin-70b-GGUF/resolve/main/chronob-1.4-lin-70b.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "uncensorie/chronob-1.4-lin-70b", "quantized_by": "mradermacher"}
|
mradermacher/chronob-1.4-lin-70b-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:uncensorie/chronob-1.4-lin-70b",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T01:54:51+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-uncensorie/chronob-1.4-lin-70b #license-llama2 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-uncensorie/chronob-1.4-lin-70b #license-llama2 #endpoints_compatible #region-us \n"
] |
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Acopa/deep_fashion_ft_sdxl
These are LoRA adaptation weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the lirus18/deepfashion_with_captions dataset. You can find some example images below.
LoRA for the text encoder was enabled: None.
Special VAE used for training: None.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
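Since the snippet above is still a TODO, here is a hedged sketch of how these LoRA weights could be applied with `diffusers`; the prompt and inference settings are illustrative assumptions, not values documented by this card.

```python
# Hypothetical usage (the card's own snippet is still TODO): base pipeline,
# LoRA loading, and SDXL-Turbo-style few-step inference are assumptions.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Acopa/deep_fashion_ft_sdxl")

# SDXL-Turbo is typically run with very few steps and no classifier-free guidance.
image = pipe(
    "a model wearing a red summer dress",  # illustrative prompt only
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("sample.png")
```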
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "diffusers-training", "lora"], "base_model": "stabilityai/sdxl-turbo", "inference": true}
|
Acopa/deep_fashion_ft_sdxl
| null |
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/sdxl-turbo",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null |
2024-04-16T01:55:04+00:00
|
[] |
[] |
TAGS
#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #diffusers-training #lora #base_model-stabilityai/sdxl-turbo #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
|
# LoRA text2image fine-tuning - Acopa/deep_fashion_ft_sdxl
These are LoRA adaption weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the lirus18/deepfashion_with_captions dataset. You can find some example images in the following.
LoRA for the text encoder was enabled: None.
Special VAE used for training: None.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
[
"# LoRA text2image fine-tuning - Acopa/deep_fashion_ft_sdxl\n\nThese are LoRA adaption weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the lirus18/deepfashion_with_captions dataset. You can find some example images in the following. \n\n\n\nLoRA for the text encoder was enabled: None.\n\nSpecial VAE used for training: None.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
[
"TAGS\n#diffusers #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #diffusers-training #lora #base_model-stabilityai/sdxl-turbo #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n",
"# LoRA text2image fine-tuning - Acopa/deep_fashion_ft_sdxl\n\nThese are LoRA adaption weights for stabilityai/sdxl-turbo. The weights were fine-tuned on the lirus18/deepfashion_with_captions dataset. You can find some example images in the following. \n\n\n\nLoRA for the text encoder was enabled: None.\n\nSpecial VAE used for training: None.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/KaeriJenti/kaori-34b-v4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/kaori-34b-v4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/kaori-34b-v4-i1-GGUF/resolve/main/kaori-34b-v4.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
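For example, to fetch the i1-Q4_K_M file recommended above, a short sketch with `huggingface_hub` (an assumption on tooling, not part of the original card):

```python
# Download one quant file from this repo; the file name matches the
# i1-Q4_K_M row in the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/kaori-34b-v4-i1-GGUF",
    filename="kaori-34b-v4.i1-Q4_K_M.gguf",
)
print(path)
```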
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "llama2", "library_name": "transformers", "base_model": "KaeriJenti/kaori-34b-v4", "quantized_by": "mradermacher"}
|
mradermacher/kaori-34b-v4-i1-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:KaeriJenti/kaori-34b-v4",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T01:55:21+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-KaeriJenti/kaori-34b-v4 #license-llama2 #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-KaeriJenti/kaori-34b-v4 #license-llama2 #endpoints_compatible #region-us \n"
] |
text-generation
| null |
# DavidAU/eleusis-7b-alpha-Q6_K-GGUF
This model was converted to GGUF format from [`maldv/eleusis-7b-alpha`](https://huggingface.co/maldv/eleusis-7b-alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maldv/eleusis-7b-alpha) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/eleusis-7b-alpha-Q6_K-GGUF --model eleusis-7b-alpha.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/eleusis-7b-alpha-Q6_K-GGUF --model eleusis-7b-alpha.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m eleusis-7b-alpha.Q6_K.gguf -n 128
```
|
{"license": "cc-by-nc-4.0", "tags": ["merge", "conversational", "multi-task", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"}
|
DavidAU/eleusis-7b-alpha-Q6_K-GGUF
| null |
[
"gguf",
"merge",
"conversational",
"multi-task",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2024-04-16T01:55:49+00:00
|
[] |
[] |
TAGS
#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #license-cc-by-nc-4.0 #region-us
|
# DavidAU/eleusis-7b-alpha-Q6_K-GGUF
This model was converted to GGUF format from 'maldv/eleusis-7b-alpha' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/eleusis-7b-alpha-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/eleusis-7b-alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/eleusis-7b-alpha-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/eleusis-7b-alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
| null |
# DavidAU/winter-garden-7b-beta-Q6_K-GGUF
This model was converted to GGUF format from [`maldv/winter-garden-7b-beta`](https://huggingface.co/maldv/winter-garden-7b-beta) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maldv/winter-garden-7b-beta) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/winter-garden-7b-beta-Q6_K-GGUF --model winter-garden-7b-beta.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/winter-garden-7b-beta-Q6_K-GGUF --model winter-garden-7b-beta.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m winter-garden-7b-beta.Q6_K.gguf -n 128
```
|
{"license": "cc-by-nc-4.0", "tags": ["merge", "conversational", "multi-task", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"}
|
DavidAU/winter-garden-7b-beta-Q6_K-GGUF
| null |
[
"gguf",
"merge",
"conversational",
"multi-task",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:cc-by-nc-4.0",
"region:us"
] | null |
2024-04-16T01:57:01+00:00
|
[] |
[] |
TAGS
#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #license-cc-by-nc-4.0 #region-us
|
# DavidAU/winter-garden-7b-beta-Q6_K-GGUF
This model was converted to GGUF format from 'maldv/winter-garden-7b-beta' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/winter-garden-7b-beta-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/winter-garden-7b-beta' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #license-cc-by-nc-4.0 #region-us \n",
"# DavidAU/winter-garden-7b-beta-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/winter-garden-7b-beta' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
| null |
# DavidAU/winter-garden-7b-alpha-Q6_K-GGUF
This model was converted to GGUF format from [`maldv/winter-garden-7b-alpha`](https://huggingface.co/maldv/winter-garden-7b-alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/maldv/winter-garden-7b-alpha) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/winter-garden-7b-alpha-Q6_K-GGUF --model winter-garden-7b-alpha.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/winter-garden-7b-alpha-Q6_K-GGUF --model winter-garden-7b-alpha.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m winter-garden-7b-alpha.Q6_K.gguf -n 128
```
|
{"license": "cc-by-nc-4.0", "tags": ["merge", "conversational", "multi-task", "llama-cpp", "gguf-my-repo"], "base_model": ["paulml/OmniBeagleSquaredMBX-v3-7B", "ZySec-AI/ZySec-7B-v1", "liminerity/Omningotex-7b-slerp", "localfultonextractor/Erosumika-7B", "KatyTheCutie/LemonadeRP-4.5.3", "cgato/Thespis-Krangled-7b", "CorticalStack/pastiche-crown-clown-7b-dare", "snorkelai/Snorkel-Mistral-PairRM-DPO", "MTSAIR/multi_verse_model"], "pipeline_tag": "text-generation", "model-index": [{"name": "winter-garden-7b-alpha - \"Smart Assistant\"", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 65.19, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 85.36, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 65.2, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 50.94}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.35, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 54.44, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha", "name": "Open LLM Leaderboard"}}]}]}
|
DavidAU/winter-garden-7b-alpha-Q6_K-GGUF
| null |
[
"gguf",
"merge",
"conversational",
"multi-task",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:ZySec-AI/ZySec-7B-v1",
"base_model:liminerity/Omningotex-7b-slerp",
"base_model:localfultonextractor/Erosumika-7B",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:cgato/Thespis-Krangled-7b",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare",
"base_model:snorkelai/Snorkel-Mistral-PairRM-DPO",
"base_model:MTSAIR/multi_verse_model",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] | null |
2024-04-16T01:58:14+00:00
|
[] |
[] |
TAGS
#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #base_model-paulml/OmniBeagleSquaredMBX-v3-7B #base_model-ZySec-AI/ZySec-7B-v1 #base_model-liminerity/Omningotex-7b-slerp #base_model-localfultonextractor/Erosumika-7B #base_model-KatyTheCutie/LemonadeRP-4.5.3 #base_model-cgato/Thespis-Krangled-7b #base_model-CorticalStack/pastiche-crown-clown-7b-dare #base_model-snorkelai/Snorkel-Mistral-PairRM-DPO #base_model-MTSAIR/multi_verse_model #license-cc-by-nc-4.0 #model-index #region-us
|
# DavidAU/winter-garden-7b-alpha-Q6_K-GGUF
This model was converted to GGUF format from 'maldv/winter-garden-7b-alpha' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/winter-garden-7b-alpha-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/winter-garden-7b-alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #merge #conversational #multi-task #llama-cpp #gguf-my-repo #text-generation #base_model-paulml/OmniBeagleSquaredMBX-v3-7B #base_model-ZySec-AI/ZySec-7B-v1 #base_model-liminerity/Omningotex-7b-slerp #base_model-localfultonextractor/Erosumika-7B #base_model-KatyTheCutie/LemonadeRP-4.5.3 #base_model-cgato/Thespis-Krangled-7b #base_model-CorticalStack/pastiche-crown-clown-7b-dare #base_model-snorkelai/Snorkel-Mistral-PairRM-DPO #base_model-MTSAIR/multi_verse_model #license-cc-by-nc-4.0 #model-index #region-us \n",
"# DavidAU/winter-garden-7b-alpha-Q6_K-GGUF\nThis model was converted to GGUF format from 'maldv/winter-garden-7b-alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
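The card leaves this section unfilled; purely as a hedged, generic sketch (assuming standard `transformers` text-generation usage, which the card does not confirm):

```python
# Generic sketch only; the card does not document usage, so treat these
# settings as assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Julesb5/gemma-2b-it-med1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```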
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Julesb5/gemma-2b-it-med1
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T01:59:20+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5397
- F1 Score: 0.7425
- Accuracy: 0.7456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
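As a rough illustration (an assumption, since the training script is not included in this card), these values map onto `transformers` `TrainingArguments` roughly as follows:

```python
# Assumed mapping of the reported hyperparameters onto TrainingArguments;
# the actual training script is not part of this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_EMP_H3K4me1-seqsight_8192_512_17M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```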
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5924 | 15.38 | 200 | 0.5765 | 0.7095 | 0.7169 |
| 0.5447 | 30.77 | 400 | 0.5595 | 0.7272 | 0.7317 |
| 0.5243 | 46.15 | 600 | 0.5546 | 0.7360 | 0.7383 |
| 0.5061 | 61.54 | 800 | 0.5666 | 0.7332 | 0.7367 |
| 0.4885 | 76.92 | 1000 | 0.5751 | 0.7292 | 0.7333 |
| 0.4723 | 92.31 | 1200 | 0.5968 | 0.7170 | 0.7225 |
| 0.4545 | 107.69 | 1400 | 0.6003 | 0.7222 | 0.7251 |
| 0.4374 | 123.08 | 1600 | 0.6200 | 0.7223 | 0.7263 |
| 0.4216 | 138.46 | 1800 | 0.6097 | 0.7221 | 0.7241 |
| 0.4064 | 153.85 | 2000 | 0.6316 | 0.7191 | 0.7222 |
| 0.3899 | 169.23 | 2200 | 0.6422 | 0.7154 | 0.7175 |
| 0.3777 | 184.62 | 2400 | 0.6741 | 0.7143 | 0.7175 |
| 0.3644 | 200.0 | 2600 | 0.6804 | 0.7108 | 0.7134 |
| 0.3524 | 215.38 | 2800 | 0.6910 | 0.7127 | 0.7153 |
| 0.3402 | 230.77 | 3000 | 0.7031 | 0.7087 | 0.7109 |
| 0.3277 | 246.15 | 3200 | 0.7168 | 0.7111 | 0.7140 |
| 0.3188 | 261.54 | 3400 | 0.7450 | 0.7001 | 0.7020 |
| 0.3094 | 276.92 | 3600 | 0.7274 | 0.7097 | 0.7128 |
| 0.299 | 292.31 | 3800 | 0.7410 | 0.7084 | 0.7096 |
| 0.2914 | 307.69 | 4000 | 0.7541 | 0.7069 | 0.7090 |
| 0.2859 | 323.08 | 4200 | 0.7659 | 0.7014 | 0.7030 |
| 0.2766 | 338.46 | 4400 | 0.7880 | 0.7050 | 0.7071 |
| 0.2702 | 353.85 | 4600 | 0.8006 | 0.7118 | 0.7140 |
| 0.2633 | 369.23 | 4800 | 0.7953 | 0.7060 | 0.7080 |
| 0.2563 | 384.62 | 5000 | 0.8192 | 0.7059 | 0.7068 |
| 0.2515 | 400.0 | 5200 | 0.8218 | 0.7132 | 0.7146 |
| 0.2466 | 415.38 | 5400 | 0.8431 | 0.7082 | 0.7102 |
| 0.2397 | 430.77 | 5600 | 0.8489 | 0.7094 | 0.7121 |
| 0.2352 | 446.15 | 5800 | 0.8485 | 0.7072 | 0.7080 |
| 0.2321 | 461.54 | 6000 | 0.8497 | 0.7110 | 0.7128 |
| 0.2261 | 476.92 | 6200 | 0.8692 | 0.7106 | 0.7124 |
| 0.2241 | 492.31 | 6400 | 0.8781 | 0.7136 | 0.7162 |
| 0.2203 | 507.69 | 6600 | 0.8860 | 0.7100 | 0.7121 |
| 0.2166 | 523.08 | 6800 | 0.8801 | 0.7108 | 0.7131 |
| 0.2145 | 538.46 | 7000 | 0.8952 | 0.7115 | 0.7137 |
| 0.2103 | 553.85 | 7200 | 0.9009 | 0.7077 | 0.7093 |
| 0.2076 | 569.23 | 7400 | 0.8995 | 0.7091 | 0.7115 |
| 0.2065 | 584.62 | 7600 | 0.9109 | 0.7100 | 0.7118 |
| 0.2028 | 600.0 | 7800 | 0.9102 | 0.7113 | 0.7131 |
| 0.2009 | 615.38 | 8000 | 0.9021 | 0.7127 | 0.7143 |
| 0.1986 | 630.77 | 8200 | 0.9254 | 0.7107 | 0.7124 |
| 0.198 | 646.15 | 8400 | 0.9228 | 0.7133 | 0.7153 |
| 0.1968 | 661.54 | 8600 | 0.9219 | 0.7110 | 0.7128 |
| 0.195 | 676.92 | 8800 | 0.9277 | 0.7129 | 0.7146 |
| 0.1939 | 692.31 | 9000 | 0.9298 | 0.7108 | 0.7124 |
| 0.1909 | 707.69 | 9200 | 0.9369 | 0.7093 | 0.7112 |
| 0.1906 | 723.08 | 9400 | 0.9346 | 0.7117 | 0.7134 |
| 0.1906 | 738.46 | 9600 | 0.9297 | 0.7100 | 0.7115 |
| 0.1903 | 753.85 | 9800 | 0.9329 | 0.7102 | 0.7118 |
| 0.1909 | 769.23 | 10000 | 0.9358 | 0.7129 | 0.7146 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T02:00:32+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_EMP\_H3K4me1-seqsight\_8192\_512\_17M-L32\_all
===================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5397
* F1 Score: 0.7425
* Accuracy: 0.7456
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5172
- F1 Score: 0.7815
- Accuracy: 0.7844
## Model description
More information needed
## Intended uses & limitations
More information needed
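No usage snippet is provided yet. As a hedged illustration, a PEFT adapter published like this one is typically attached to its base model as sketched below; the sequence-classification head, label count, and DNA tokenization shown are assumptions (they are not documented here), and `trust_remote_code=True` may be needed if the base model relies on custom code.
```python
# Hedged sketch only: the exact task head and preprocessing used for this
# adapter are not documented; a two-label sequence-classification head is an
# assumption based on the reported F1/accuracy metrics.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_8192_512_17M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_17M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Classify one DNA sequence (illustrative input).
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```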
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5368 | 14.29 | 200 | 0.5165 | 0.7527 | 0.7563 |
| 0.4834 | 28.57 | 400 | 0.5074 | 0.7560 | 0.7606 |
| 0.4622 | 42.86 | 600 | 0.5005 | 0.7663 | 0.7698 |
| 0.4446 | 57.14 | 800 | 0.5116 | 0.7634 | 0.7683 |
| 0.4298 | 71.43 | 1000 | 0.4947 | 0.7683 | 0.7718 |
| 0.4136 | 85.71 | 1200 | 0.5073 | 0.7726 | 0.7758 |
| 0.399 | 100.0 | 1400 | 0.5128 | 0.7747 | 0.7772 |
| 0.3837 | 114.29 | 1600 | 0.5215 | 0.7716 | 0.7749 |
| 0.3675 | 128.57 | 1800 | 0.5357 | 0.7720 | 0.7752 |
| 0.3539 | 142.86 | 2000 | 0.5555 | 0.7594 | 0.7638 |
| 0.3385 | 157.14 | 2200 | 0.5878 | 0.7621 | 0.7663 |
| 0.3253 | 171.43 | 2400 | 0.5893 | 0.7580 | 0.7626 |
| 0.3131 | 185.71 | 2600 | 0.5699 | 0.7733 | 0.7747 |
| 0.301 | 200.0 | 2800 | 0.6070 | 0.7677 | 0.7712 |
| 0.2903 | 214.29 | 3000 | 0.6074 | 0.7698 | 0.7729 |
| 0.2799 | 228.57 | 3200 | 0.6222 | 0.7654 | 0.7686 |
| 0.2719 | 242.86 | 3400 | 0.6439 | 0.7672 | 0.7712 |
| 0.2618 | 257.14 | 3600 | 0.6609 | 0.7579 | 0.7620 |
| 0.2547 | 271.43 | 3800 | 0.6716 | 0.7653 | 0.7686 |
| 0.246 | 285.71 | 4000 | 0.6827 | 0.7637 | 0.7675 |
| 0.2392 | 300.0 | 4200 | 0.6764 | 0.7604 | 0.7635 |
| 0.2331 | 314.29 | 4400 | 0.6800 | 0.7630 | 0.7658 |
| 0.225 | 328.57 | 4600 | 0.7434 | 0.7578 | 0.7626 |
| 0.2199 | 342.86 | 4800 | 0.7195 | 0.7590 | 0.7626 |
| 0.2167 | 357.14 | 5000 | 0.7293 | 0.7643 | 0.7672 |
| 0.2101 | 371.43 | 5200 | 0.7444 | 0.7616 | 0.7646 |
| 0.2045 | 385.71 | 5400 | 0.7655 | 0.7600 | 0.7640 |
| 0.2009 | 400.0 | 5600 | 0.7503 | 0.7639 | 0.7666 |
| 0.1966 | 414.29 | 5800 | 0.7710 | 0.7623 | 0.7655 |
| 0.193 | 428.57 | 6000 | 0.7775 | 0.7654 | 0.7689 |
| 0.1885 | 442.86 | 6200 | 0.8072 | 0.7639 | 0.7675 |
| 0.1861 | 457.14 | 6400 | 0.7887 | 0.7633 | 0.7663 |
| 0.1816 | 471.43 | 6600 | 0.8130 | 0.7614 | 0.7649 |
| 0.1805 | 485.71 | 6800 | 0.8069 | 0.7635 | 0.7663 |
| 0.1766 | 500.0 | 7000 | 0.8184 | 0.7588 | 0.7623 |
| 0.1746 | 514.29 | 7200 | 0.8099 | 0.7643 | 0.7669 |
| 0.1726 | 528.57 | 7400 | 0.8225 | 0.7615 | 0.7646 |
| 0.1683 | 542.86 | 7600 | 0.8084 | 0.7707 | 0.7724 |
| 0.1678 | 557.14 | 7800 | 0.8372 | 0.7641 | 0.7672 |
| 0.1658 | 571.43 | 8000 | 0.8513 | 0.7618 | 0.7652 |
| 0.1638 | 585.71 | 8200 | 0.8478 | 0.7635 | 0.7663 |
| 0.1616 | 600.0 | 8400 | 0.8361 | 0.7677 | 0.7701 |
| 0.1612 | 614.29 | 8600 | 0.8467 | 0.7666 | 0.7689 |
| 0.1594 | 628.57 | 8800 | 0.8436 | 0.7660 | 0.7686 |
| 0.1582 | 642.86 | 9000 | 0.8547 | 0.7638 | 0.7666 |
| 0.1573 | 657.14 | 9200 | 0.8667 | 0.7574 | 0.7609 |
| 0.1565 | 671.43 | 9400 | 0.8574 | 0.7643 | 0.7669 |
| 0.1548 | 685.71 | 9600 | 0.8626 | 0.7644 | 0.7672 |
| 0.1562 | 700.0 | 9800 | 0.8597 | 0.7647 | 0.7675 |
| 0.1562 | 714.29 | 10000 | 0.8591 | 0.7647 | 0.7675 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T02:01:04+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_EMP\_H3K36me3-seqsight\_8192\_512\_17M-L32\_all
====================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_EMP\_H3K36me3 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5172
* F1 Score: 0.7815
* Accuracy: 0.7844
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
reinforcement-learning
| null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
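For context, the heart of the REINFORCE update taught in that unit can be sketched as follows. This is a generic PyTorch illustration, not the exact script used to train this checkpoint; the episode data is dummy data and the discount factor is a placeholder.
```python
# Generic REINFORCE (policy-gradient) sketch. Episode collection from
# Pixelcopter-PLE-v0 is omitted; log_probs and rewards stand in for one
# rolled-out episode.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Monte-Carlo policy-gradient loss for a single episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return at each step
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # Normalizing returns is a common variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    log_probs = torch.stack(log_probs)
    return -(log_probs * returns).sum()  # gradient ascent on expected return

# Dummy data standing in for one episode.
log_probs = [torch.tensor(-0.5, requires_grad=True) for _ in range(5)]
rewards = [1.0, 1.0, -1.0, 1.0, 5.0]
loss = reinforce_loss(log_probs, rewards)
loss.backward()
print(loss.item())
```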
|
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "REINFORCE-Pixelcopter-PLE-v0", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "-2.70 +/- 0.46", "name": "mean_reward", "verified": false}]}]}]}
|
Rudolph314/REINFORCE-Pixelcopter-PLE-v0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null |
2024-04-16T02:02:53+00:00
|
[] |
[] |
TAGS
#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
|
# Reinforce Agent playing Pixelcopter-PLE-v0
This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
|
[
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
[
"TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n",
"# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
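Until the authors fill this in, the hedged sketch below shows the standard way a token-classification checkpoint is queried; whether this particular XLNet checkpoint loads cleanly this way, and what its label set represents, has not been verified.
```python
# Hedged sketch: standard token-classification inference. The label meanings
# for this concept-extraction checkpoint are not documented in this card.
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="AwesomeREK/concept-extraction-xlnet-early-stopping",
    aggregation_strategy="simple",  # merge sub-word pieces into whole spans
)
print(extractor("Gradient descent minimizes the loss function of a neural network."))
```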
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
AwesomeREK/concept-extraction-xlnet-early-stopping
| null |
[
"transformers",
"safetensors",
"xlnet",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:05:55+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
reinforcement-learning
|
stable-baselines3
|
# **DQN** Agent playing **ALE/Pacman-v5**
This is a trained model of a **DQN** agent playing **ALE/Pacman-v5**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Pacman-v5 -orga ledmands -f logs/
python -m rl_zoo3.enjoy --algo dqn --env ALE/Pacman-v5 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Pacman-v5 -orga ledmands -f logs/
python -m rl_zoo3.enjoy --algo dqn --env ALE/Pacman-v5 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env ALE/Pacman-v5 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Pacman-v5 -f logs/ -orga ledmands
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 66000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gamma', 0.999),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2500000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'frameskip': 3, 'render_mode': 'rgb_array'}
```
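For reference, an evaluation environment roughly matching these settings could be built as sketched below. This is illustrative only: the RL Zoo applies the configured wrapper automatically when you use the commands above, and how `AtariWrapper`'s own frame-skipping interacts with `frameskip: 3` here is not documented, so that part is an assumption.
```python
# Hedged sketch: an env roughly matching the env_wrapper, frame_stack and
# frameskip settings above. Requires gymnasium[atari] and the Atari ROMs.
# Note: AtariWrapper applies its own preprocessing (including frame skipping
# by default); its exact interaction with frameskip=3 is an assumption.
import gymnasium as gym
from stable_baselines3.common.atari_wrappers import AtariWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

def make_env():
    env = gym.make("ALE/Pacman-v5", frameskip=3, render_mode="rgb_array")
    return AtariWrapper(env)  # env_wrapper from the hyperparameters above

vec_env = VecFrameStack(DummyVecEnv([make_env]), n_stack=4)  # frame_stack: 4
print(vec_env.observation_space)
```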
|
{"library_name": "stable-baselines3", "tags": ["ALE/Pacman-v5", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "ALE/Pacman-v5", "type": "ALE/Pacman-v5"}, "metrics": [{"type": "mean_reward", "value": "252.30 +/- 137.05", "name": "mean_reward", "verified": false}]}]}]}
|
ledmands/dqn_Pacman-v5_gamma999_v1
| null |
[
"stable-baselines3",
"ALE/Pacman-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-16T02:05:56+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #ALE/Pacman-v5 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# DQN Agent playing ALE/Pacman-v5
This is a trained model of a DQN agent playing ALE/Pacman-v5
using the stable-baselines3 library
and the RL Zoo.
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: URL
SB3: URL
SB3 Contrib: URL
Install the RL Zoo (with SB3 and SB3-Contrib):
If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:
## Training (with the RL Zoo)
## Hyperparameters
# Environment Arguments
|
[
"# DQN Agent playing ALE/Pacman-v5\nThis is a trained model of a DQN agent playing ALE/Pacman-v5\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
[
"TAGS\n#stable-baselines3 #ALE/Pacman-v5 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# DQN Agent playing ALE/Pacman-v5\nThis is a trained model of a DQN agent playing ALE/Pacman-v5\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.",
"## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:",
"## Training (with the RL Zoo)",
"## Hyperparameters",
"# Environment Arguments"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
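Until the authors fill this in, the hedged sketch below shows generic causal-LM inference with the accompanying checkpoint; whether it loads without extra configuration, what prompt format it expects, and its language coverage ("wikija" in the name suggests Japanese Wikipedia data, but that is only an inference) are not documented.
```python
# Hedged sketch: generic causal-LM inference. Prompt format, language and
# intended use of this checkpoint are not documented in the card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kanxxyc/llama_B_wikija_global_step40",
    device_map="auto",
)
# Japanese prompt chosen only because of the "wikija" name; this is an assumption.
print(generator("富士山は", max_new_tokens=50)[0]["generated_text"])
```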
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
kanxxyc/llama_B_wikija_global_step40
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:06:57+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1244
- F1 Score: 0.7210
- Accuracy: 0.7210
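For context on how scores like the F1 and accuracy above are usually computed, a minimal scikit-learn sketch follows; the averaging mode actually used for this card's F1 is not documented, so `average="macro"` is an assumption, and the labels and predictions shown are purely illustrative.
```python
# Hedged sketch: computing the kind of metrics reported above with scikit-learn.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # illustrative labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]   # illustrative predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```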
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5809 | 40.0 | 200 | 0.5463 | 0.7169 | 0.7173 |
| 0.489 | 80.0 | 400 | 0.5336 | 0.7299 | 0.7309 |
| 0.4451 | 120.0 | 600 | 0.5305 | 0.7365 | 0.7370 |
| 0.4031 | 160.0 | 800 | 0.5548 | 0.7555 | 0.7556 |
| 0.3637 | 200.0 | 1000 | 0.5753 | 0.7614 | 0.7617 |
| 0.3279 | 240.0 | 1200 | 0.6178 | 0.7605 | 0.7605 |
| 0.2949 | 280.0 | 1400 | 0.6433 | 0.7589 | 0.7605 |
| 0.2599 | 320.0 | 1600 | 0.6848 | 0.7601 | 0.7605 |
| 0.2326 | 360.0 | 1800 | 0.7133 | 0.7528 | 0.7531 |
| 0.2039 | 400.0 | 2000 | 0.7815 | 0.7518 | 0.7519 |
| 0.1811 | 440.0 | 2200 | 0.8221 | 0.7618 | 0.7617 |
| 0.1621 | 480.0 | 2400 | 0.8492 | 0.7556 | 0.7556 |
| 0.1454 | 520.0 | 2600 | 0.9122 | 0.7625 | 0.7630 |
| 0.1307 | 560.0 | 2800 | 0.9368 | 0.7580 | 0.7580 |
| 0.1174 | 600.0 | 3000 | 0.9777 | 0.7530 | 0.7531 |
| 0.1062 | 640.0 | 3200 | 1.0339 | 0.7526 | 0.7531 |
| 0.1 | 680.0 | 3400 | 1.0108 | 0.7531 | 0.7531 |
| 0.0915 | 720.0 | 3600 | 1.0380 | 0.7567 | 0.7568 |
| 0.0838 | 760.0 | 3800 | 1.0727 | 0.7592 | 0.7593 |
| 0.0785 | 800.0 | 4000 | 1.1000 | 0.7514 | 0.7519 |
| 0.0754 | 840.0 | 4200 | 1.0992 | 0.7553 | 0.7556 |
| 0.0689 | 880.0 | 4400 | 1.1460 | 0.7491 | 0.7494 |
| 0.0657 | 920.0 | 4600 | 1.1598 | 0.7494 | 0.7494 |
| 0.0629 | 960.0 | 4800 | 1.1911 | 0.7554 | 0.7556 |
| 0.0588 | 1000.0 | 5000 | 1.1959 | 0.7479 | 0.7481 |
| 0.057 | 1040.0 | 5200 | 1.1908 | 0.7542 | 0.7543 |
| 0.0539 | 1080.0 | 5400 | 1.2467 | 0.7578 | 0.7580 |
| 0.0509 | 1120.0 | 5600 | 1.2427 | 0.7578 | 0.7580 |
| 0.0505 | 1160.0 | 5800 | 1.2383 | 0.7530 | 0.7531 |
| 0.0474 | 1200.0 | 6000 | 1.2852 | 0.7543 | 0.7543 |
| 0.0464 | 1240.0 | 6200 | 1.2793 | 0.7590 | 0.7593 |
| 0.043 | 1280.0 | 6400 | 1.3157 | 0.7592 | 0.7593 |
| 0.0429 | 1320.0 | 6600 | 1.2902 | 0.7578 | 0.7580 |
| 0.0423 | 1360.0 | 6800 | 1.3206 | 0.7530 | 0.7531 |
| 0.04 | 1400.0 | 7000 | 1.3201 | 0.7578 | 0.7580 |
| 0.0395 | 1440.0 | 7200 | 1.3319 | 0.7603 | 0.7605 |
| 0.0392 | 1480.0 | 7400 | 1.3190 | 0.7603 | 0.7605 |
| 0.0374 | 1520.0 | 7600 | 1.3765 | 0.7529 | 0.7531 |
| 0.0371 | 1560.0 | 7800 | 1.3795 | 0.7504 | 0.7506 |
| 0.0348 | 1600.0 | 8000 | 1.3803 | 0.7529 | 0.7531 |
| 0.035 | 1640.0 | 8200 | 1.3693 | 0.7541 | 0.7543 |
| 0.0342 | 1680.0 | 8400 | 1.3924 | 0.7565 | 0.7568 |
| 0.0333 | 1720.0 | 8600 | 1.3872 | 0.7516 | 0.7519 |
| 0.0335 | 1760.0 | 8800 | 1.3740 | 0.7516 | 0.7519 |
| 0.0323 | 1800.0 | 9000 | 1.3980 | 0.7541 | 0.7543 |
| 0.0323 | 1840.0 | 9200 | 1.3897 | 0.7504 | 0.7506 |
| 0.0317 | 1880.0 | 9400 | 1.3950 | 0.7529 | 0.7531 |
| 0.0321 | 1920.0 | 9600 | 1.3941 | 0.7541 | 0.7543 |
| 0.0302 | 1960.0 | 9800 | 1.4041 | 0.7553 | 0.7556 |
| 0.0307 | 2000.0 | 10000 | 1.4035 | 0.7540 | 0.7543 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_mouse_0-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_0-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T02:08:29+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_mouse\_0-seqsight\_8192\_512\_17M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1244
* F1 Score: 0.7210
* Accuracy: 0.7210
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## install
```
pip install torch numpy transformers datasets tiktoken wandb tqdm
```
Dependencies:
- [pytorch](https://pytorch.org) <3
- [numpy](https://numpy.org/install/) <3
- `transformers` for huggingface transformers <3 (to load GPT-2 checkpoints)
- `datasets` for huggingface datasets <3 (if you want to download + preprocess OpenWebText)
- `tiktoken` for OpenAI's fast BPE code <3
- `wandb` for optional logging <3
- `tqdm` for progress bars <3
## quick start
Inference:
```
$ python inference.py
```
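If the uploaded weights and tokenizer are in standard GPT-2 format (the repo is tagged `transformers`/`gpt2`, but this is not verified here), loading through Hugging Face Transformers should also work, roughly as follows:
```python
# Hedged sketch: loading the checkpoint through transformers instead of
# inference.py. Assumes the repo stores standard GPT-2-format weights and
# tokenizer files.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vincentoh/gpt2-124m-redpjs")
model = AutoModelForCausalLM.from_pretrained("vincentoh/gpt2-124m-redpjs")

inputs = tokenizer("Once upon a time", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```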
## Thanks
Thanks to the [Zero To Hero series](https://karpathy.ai/zero-to-hero.html). The [GPT video](https://www.youtube.com/watch?v=kCc8FmEb1nY) in particular is a good starting point if you have some prior language modeling context.
|
{}
|
vincentoh/gpt2-124m-redpjs
| null |
[
"transformers",
"pytorch",
"gpt2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:08:40+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #endpoints_compatible #text-generation-inference #region-us
|
## install
Dependencies:
- pytorch <3
- numpy <3
- 'transformers' for huggingface transformers <3 (to load GPT-2 checkpoints)
- 'datasets' for huggingface datasets <3 (if you want to download + preprocess OpenWebText)
- 'tiktoken' for OpenAI's fast BPE code <3
- 'wandb' for optional logging <3
- 'tqdm' for progress bars <3
## quick start
Inference:
## Thanks
Zero To Hero series. Specifically, the GPT video is popular if you have some prior language modeling context.
|
[
"## install\n\n\n\nDependencies:\n\n- pytorch <3\n- numpy <3\n- 'transformers' for huggingface transformers <3 (to load GPT-2 checkpoints)\n- 'datasets' for huggingface datasets <3 (if you want to download + preprocess OpenWebText)\n- 'tiktoken' for OpenAI's fast BPE code <3\n- 'wandb' for optional logging <3\n- 'tqdm' for progress bars <3",
"## quick start\n\nInference:",
"## Thanks\n Zero To Hero series. Specifically, the GPT video is popular if you have some prior language modeling context."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #endpoints_compatible #text-generation-inference #region-us \n",
"## install\n\n\n\nDependencies:\n\n- pytorch <3\n- numpy <3\n- 'transformers' for huggingface transformers <3 (to load GPT-2 checkpoints)\n- 'datasets' for huggingface datasets <3 (if you want to download + preprocess OpenWebText)\n- 'tiktoken' for OpenAI's fast BPE code <3\n- 'wandb' for optional logging <3\n- 'tqdm' for progress bars <3",
"## quick start\n\nInference:",
"## Thanks\n Zero To Hero series. Specifically, the GPT video is popular if you have some prior language modeling context."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small for Quran Recognition
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Quran_Reciters dataset.
It achieves the following results on the evaluation set:
- epoch: 1.6474
- eval_loss: 0.0829
- eval_runtime: 2832.7593
- eval_samples_per_second: 1.428
- eval_steps_per_second: 0.179
- eval_wer: 14.8450
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
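A usage snippet is not included yet; the hedged sketch below shows the usual way to transcribe audio with this checkpoint via the ASR pipeline. Forcing Arabic decoding is an assumption based on the card's language tag, and `recitation.wav` is only an illustrative file path.
```python
# Hedged sketch: transcription with the fine-tuned checkpoint. Forcing the
# Arabic language token is an assumption based on the card's "ara" tag.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AsemBadr/the-final-whisper",
    chunk_length_s=30,
)
# "recitation.wav" is an illustrative local audio path.
result = asr("recitation.wav", generate_kwargs={"language": "arabic", "task": "transcribe"})
print(result["text"])
```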
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
|
{"language": ["ara"], "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "generated_from_trainer"], "datasets": ["AsemBadr/GP"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small for Quran Recognition", "results": []}]}
|
AsemBadr/the-final-whisper
| null |
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ara",
"dataset:AsemBadr/GP",
"base_model:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:09:34+00:00
|
[] |
[
"ara"
] |
TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #hf-asr-leaderboard #generated_from_trainer #ara #dataset-AsemBadr/GP #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
|
# Whisper Small for Quran Recognition
This model is a fine-tuned version of openai/whisper-small on the Quran_Reciters dataset.
It achieves the following results on the evaluation set:
- epoch: 1.6474
- eval_loss: 0.0829
- eval_runtime: 2832.7593
- eval_samples_per_second: 1.428
- eval_steps_per_second: 0.179
- eval_wer: 14.8450
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.1
|
[
"# Whisper Small for Quran Recognition\n\nThis model is a fine-tuned version of openai/whisper-small on the Quran_Reciters dataset.\nIt achieves the following results on the evaluation set:\n- epoch: 1.6474\n- eval_loss: 0.0829\n- eval_runtime: 2832.7593\n- eval_samples_per_second: 1.428\n- eval_steps_per_second: 0.179\n- eval_wer: 14.8450\n- step: 1000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 5000\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.17.1\n- Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #hf-asr-leaderboard #generated_from_trainer #ara #dataset-AsemBadr/GP #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Whisper Small for Quran Recognition\n\nThis model is a fine-tuned version of openai/whisper-small on the Quran_Reciters dataset.\nIt achieves the following results on the evaluation set:\n- epoch: 1.6474\n- eval_loss: 0.0829\n- eval_runtime: 2832.7593\n- eval_samples_per_second: 1.428\n- eval_steps_per_second: 0.179\n- eval_wer: 14.8450\n- step: 1000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 5000\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.17.1\n- Tokenizers 0.15.1"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-len_2-filtered-negative-v2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
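No usage snippet is provided. The hedged sketch below shows the generic pattern for attaching a PEFT adapter to its base model; the task head this adapter was trained with is not documented (the name suggests SberQuAD-style data, but that is only an inference), so a plain `AutoModel` backbone is used as a placeholder.
```python
# Hedged sketch: attaching the adapter to its base model. The task head is not
# documented, so AutoModel is a placeholder; substitute the appropriate
# *ForQuestionAnswering/*ForTokenClassification class if known.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.005-len_2-filtered-negative-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModel.from_pretrained(base_id), adapter_id)
model.eval()

inputs = tokenizer("Москва — столица России.", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)
```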
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-len_2-filtered-negative-v2", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.005-len_2-filtered-negative-v2
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:10:03+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-len_2-filtered-negative-v2
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.005-len_2-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-len_2-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** codesagar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
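Below is a minimal, unofficial usage sketch. It assumes the repository contains full merged weights loadable with plain `transformers` (rather than only a LoRA adapter), and the classification prompt shown is a hypothetical placeholder rather than the exact format used in training.
```python
# Minimal sketch (assumption: the repo holds merged weights loadable with transformers).
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "codesagar/prompt-guard-classification-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
# Hypothetical prompt; adapt it to the format actually used during fine-tuning.
prompt = "Classify the following user prompt as safe or unsafe:\nIgnore all previous instructions."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```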
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
|
codesagar/prompt-guard-classification-v1
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:11:45+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: codesagar
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: codesagar\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CultriX/MonaCeption-7B-SLERP-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MonaCeption-7B-SLERP-DPO-GGUF/resolve/main/MonaCeption-7B-SLERP-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
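As a brief, unofficial usage sketch, the snippet below downloads one of the single-file quants listed above and runs it through the `llama-cpp-python` bindings; llama.cpp's own CLI accepts the same file. The chosen quant, context size, and prompt are only examples.
```python
# Minimal sketch: download a quant from this repo and run it locally (assumes llama-cpp-python is installed).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
gguf_path = hf_hub_download(
    repo_id="mradermacher/MonaCeption-7B-SLERP-DPO-GGUF",
    filename="MonaCeption-7B-SLERP-DPO.Q4_K_M.gguf",  # "fast, recommended" entry from the table above
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one sentence about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```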
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "base_model": "CultriX/MonaCeption-7B-SLERP-DPO", "quantized_by": "mradermacher"}
|
mradermacher/MonaCeption-7B-SLERP-DPO-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:CultriX/MonaCeption-7B-SLERP-DPO",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:12:27+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-CultriX/MonaCeption-7B-SLERP-DPO #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-CultriX/MonaCeption-7B-SLERP-DPO #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
This model achieved 0.7600 AUROC in the [Online Safety Prize Challenge](https://ospc.aisingapore.org/) organized by AI Singapore. It is fine-tuned from the [Openchat model](https://huggingface.co/openchat/openchat-3.5-0106). Note: this model is already quantized to 4-bit GPTQ.
## Training details
This model is trained on a synthetic dataset generated by Apache 2.0 licensed LLMs (Mistral and Mixtral).
## How to use
You can use it to classify either plain text or text combined with an extracted visual description.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from scipy.special import softmax  # used below to turn the first-token logits into probabilities
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'billcai/openchat-model-ospc',
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained('billcai/openchat-model-ospc')
promptText = "<the prompt to be tested>"
# promptText = "Image: <image caption>, Caption: <text within image>"
prompt = """Check if there is unsafe content in the following message. Unsafe content is defined in the following categories.
<BEGIN UNSAFE CONTENT CATEGORIES>
Contains hate speech and harmful content. Harmful content centres on divisive and contentious subject matters. Typical areas include:
- Racial disparities
- Religious beliefs and practices
- Sexual orientation
- Nationalistic sentiments and xenophobia against migrants
- Socio-economic divides and class struggles
- Age-related biases
- Gender discrimination
- Discrimination based on illnesses and disabilities
These content may be based on Singapore context, including the use of non-English languages like Mandarin, Malay and Tamil.
<END UNSAFE CONTENT CATEGORIES>
This is the message you are checking if it contains unsafe content.
<BEGIN MESSAGE>
{}
<END MESSAGE>
Provide your safety assessment for the message above. Your first line MUST read 'safe' or 'unsafe'.
""".format(
promptText
)
messages = [
{"role": "system", "content": "You are a helpful assistant deciding whether a piece of content is safe or unsafe based on specific criteria. The content may contain irrelevant words as it has been extracted from an image."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generation_output = model.generate(  # 'model' is the checkpoint loaded above
model_inputs.input_ids,
max_new_tokens=10,
temperature=0.1,
output_logits=True,
return_dict_in_generate=True
)
generated_sequences = generation_output['sequences']
generated_logits = generation_output['logits']
# index [1] skips the BOS token the tokenizer prepends
unsafeTokenId = tokenizer.encode('unsafe')[1]
safeTokenId = tokenizer.encode('safe')[1]
firstLogit = generated_logits[0].cpu().numpy()
# softmax over the 'unsafe' vs 'safe' logits at the first generated position
prob = softmax([
firstLogit[0,unsafeTokenId],
firstLogit[0,safeTokenId],
])
print(prob) # first entry is the score for the unsafe token
```
# License
Apache 2.0
|
{"language": ["en", "zh", "ms", "ta"], "license": "apache-2.0", "tags": ["multilingual", "mistral", "sft", "chat", "instruction", "gptq"], "datasets": ["billcai/ospc-dataset-v2"], "widget": [{"text": "Hello World", "example_title": "Sample prompt"}], "base_model": "openchat/openchat-3.5-0106"}
|
goldbach7/openchat-model-ospc
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"multilingual",
"sft",
"chat",
"instruction",
"gptq",
"conversational",
"en",
"zh",
"ms",
"ta",
"dataset:billcai/ospc-dataset-v2",
"base_model:openchat/openchat-3.5-0106",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-16T02:19:17+00:00
|
[] |
[
"en",
"zh",
"ms",
"ta"
] |
TAGS
#transformers #safetensors #mistral #text-generation #multilingual #sft #chat #instruction #gptq #conversational #en #zh #ms #ta #dataset-billcai/ospc-dataset-v2 #base_model-openchat/openchat-3.5-0106 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
This model achieved 0.7600 AUROC for the Online Safety Prize Challenge organized by AI Singapore. It is fine-tuned from Openchat model. Note, this model is already quantized to 4-bit GPTQ.
## Training details
This model is trained on a synthetic dataset generated by Apache 2.0 licensed LLMs (Mistral and Mixtral).
## How to use
You can either use it to classify text or text + extracted visual description.
# License
Apache 2.0
|
[
"## Training details\nThis model is trained on a synthetic dataset generated by Apache 2.0 licensed LLMs (Mistral and Mixtral).",
"## How to use\n\nYou can either use it to classify text or text + extracted visual description.",
"# License\n\nApache 2.0"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #multilingual #sft #chat #instruction #gptq #conversational #en #zh #ms #ta #dataset-billcai/ospc-dataset-v2 #base_model-openchat/openchat-3.5-0106 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"## Training details\nThis model is trained on a synthetic dataset generated by Apache 2.0 licensed LLMs (Mistral and Mixtral).",
"## How to use\n\nYou can either use it to classify text or text + extracted visual description.",
"# License\n\nApache 2.0"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/HachiML/Swallow-MS-7b-v0.1-ChatSkill-Wizard
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.IQ3_XS.gguf) | IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF/resolve/main/Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
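The same unofficial pattern applies here: the sketch below downloads one of the quants listed above and queries it through `llama-cpp-python`'s chat API. The chosen quant, context size, and message are placeholders.
```python
# Minimal sketch: chat-style inference on a downloaded quant (assumes llama-cpp-python is installed).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
gguf_path = hf_hub_download(
    repo_id="mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF",
    filename="Swallow-MS-7b-v0.1-ChatSkill-Wizard.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```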
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "library_name": "transformers", "tags": [], "base_model": "HachiML/Swallow-MS-7b-v0.1-ChatSkill-Wizard", "quantized_by": "mradermacher"}
|
mradermacher/Swallow-MS-7b-v0.1-ChatSkill-Wizard-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:HachiML/Swallow-MS-7b-v0.1-ChatSkill-Wizard",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:21:11+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-HachiML/Swallow-MS-7b-v0.1-ChatSkill-Wizard #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-HachiML/Swallow-MS-7b-v0.1-ChatSkill-Wizard #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta_base_0414_github_cybersecurity_READMEs
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4829
- Accuracy: 0.5630
- F1 Score: 0.2677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 2.8358 | 1.0 | 1627 | 2.8923 | 0.5156 | 0.2331 |
| 2.4606 | 2.0 | 3254 | 2.5811 | 0.5558 | 0.2696 |
| 2.3278 | 3.0 | 4881 | 2.4905 | 0.5637 | 0.2739 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
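As a quick, unofficial sanity check of the fine-tuned masked-language model, the sketch below queries it with the `fill-mask` pipeline; the example sentence is an arbitrary README-style placeholder.
```python
# Minimal sketch: top mask-fill predictions from the fine-tuned checkpoint.
from transformers import pipeline
fill = pipeline("fill-mask", model="zhijunjunlin/distilroberta_base_0414_github_cybersecurity_READMEs")
for cand in fill("This repository provides a <mask> scanner for web applications."):
    print(f"{cand['token_str']!r}  score={cand['score']:.3f}")
```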
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilroberta-base", "model-index": [{"name": "distilroberta_base_0414_github_cybersecurity_READMEs", "results": []}]}
|
zhijunjunlin/distilroberta_base_0414_github_cybersecurity_READMEs
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:21:55+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilroberta\_base\_0414\_github\_cybersecurity\_READMEs
=========================================================
This model is a fine-tuned version of distilroberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4829
* Accuracy: 0.5630
* F1 Score: 0.2677
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
{"library_name": "peft", "base_model": "Viet-Mistral/Vistral-7B-Chat"}
|
chitb/LaVy-instruct
| null |
[
"peft",
"tensorboard",
"safetensors",
"llava_mistral",
"arxiv:1910.09700",
"base_model:Viet-Mistral/Vistral-7B-Chat",
"region:us"
] | null |
2024-04-16T02:23:11+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #tensorboard #safetensors #llava_mistral #arxiv-1910.09700 #base_model-Viet-Mistral/Vistral-7B-Chat #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.9.0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.9.0"
] |
[
"TAGS\n#peft #tensorboard #safetensors #llava_mistral #arxiv-1910.09700 #base_model-Viet-Mistral/Vistral-7B-Chat #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.9.0"
] |
depth-estimation
|
transformers
|
# natural_science_model
natural_science_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
* [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: google-bert/bert-base-uncased
layer_range: [0, 32]
- model: KM4STfulltext/SSCI-SciBERT-e4
layer_range: [0, 32]
merge_method: slerp
base_model: google-bert/bert-base-uncased
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/natural_science_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "google-bert/bert-base-uncased", "KM4STfulltext/SSCI-SciBERT-e4"], "base_model": ["google-bert/bert-base-uncased", "KM4STfulltext/SSCI-SciBERT-e4"], "pipeline_tag": "depth-estimation"}
|
nagayama0706/natural_science_model
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"google-bert/bert-base-uncased",
"KM4STfulltext/SSCI-SciBERT-e4",
"depth-estimation",
"base_model:google-bert/bert-base-uncased",
"base_model:KM4STfulltext/SSCI-SciBERT-e4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:28:32+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #google-bert/bert-base-uncased #KM4STfulltext/SSCI-SciBERT-e4 #depth-estimation #base_model-google-bert/bert-base-uncased #base_model-KM4STfulltext/SSCI-SciBERT-e4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# natural_science_model
natural_science_model is a merge of the following models using LazyMergekit:
* google-bert/bert-base-uncased
* KM4STfulltext/SSCI-SciBERT-e4
## Configuration
## Usage
|
[
"# natural_science_model\n\nnatural_science_model is a merge of the following models using LazyMergekit:\n* google-bert/bert-base-uncased\n* KM4STfulltext/SSCI-SciBERT-e4",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #google-bert/bert-base-uncased #KM4STfulltext/SSCI-SciBERT-e4 #depth-estimation #base_model-google-bert/bert-base-uncased #base_model-KM4STfulltext/SSCI-SciBERT-e4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# natural_science_model\n\nnatural_science_model is a merge of the following models using LazyMergekit:\n* google-bert/bert-base-uncased\n* KM4STfulltext/SSCI-SciBERT-e4",
"## Configuration",
"## Usage"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]}
|
lxl2023/my_awesome_model
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:31:29+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# my_awesome_model
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# my_awesome_model\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# my_awesome_model\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
AwesomeREK/concept-extraction-xlnet
| null |
[
"transformers",
"safetensors",
"xlnet",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:33:11+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
# Training
```
deepspeed --include=node-0:2 sft_fix_target_modules.py --deepspeed dp_zero0.json \
--model_name_or_path="meta-llama/Llama-2-7b-chat-hf" \
--dataset_name="timdettmers/openassistant-guanaco" \
--dataset_text_field="text" \
--report_to="tensorboard" \
--learning_rate=1e-5 \
--per_device_train_batch_size=32 \
--gradient_accumulation_steps=4 \
--output_dir="guanaco_Llama-2-7b-chat-hf_lora" \
--logging_steps=1 \
--num_train_epochs=15 \
--max_steps=-1 \
--gradient_checkpointing \
--fp16 \
--save_steps=0.3 \
--use_peft \
--lora_r=64 \
--lora_alpha=16
```
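# Inference
A minimal, unofficial sketch for using the result: it assumes the LoRA adapter produced by the command above was pushed to this repository and that prompts follow the `### Human:` / `### Assistant:` format of the Guanaco dataset.
```python
# Minimal inference sketch (assumptions: adapter pushed to this repo, Guanaco-style prompt format).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-lora"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
prompt = "### Human: What is LoRA fine-tuning?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```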
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"license": "apache-2.0", "library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
|
tricktreat/Llama-2-7b-chat-hf-guanaco-lora
| null |
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:33:45+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #license-apache-2.0 #region-us
|
# Training
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #license-apache-2.0 #region-us \n",
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** eruzak
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
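The card does not include inference code; below is a minimal, hedged sketch of loading the model with Unsloth, assuming the repository contains an adapter (or merged weights) compatible with `FastLanguageModel`. The example input is purely illustrative.

```python
# Minimal sketch (assumption: the repo holds Unsloth-compatible weights for the 4-bit Mistral base).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="eruzak/unsloth_mistral_predict_prompt_RL_v8",  # this repository
    max_seq_length=2048,       # assumed; not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Predict a prompt for: a photo of a red fox at dawn", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```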
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
|
eruzak/unsloth_mistral_predict_prompt_RL_v8
| null |
[
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:34:28+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: eruzak
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** eruzak
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
|
eruzak/unsloth_mistral_predict_prompt_RL_v9
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:35:01+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: eruzak
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: eruzak\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
peft
|
# Training
```
deepspeed --include=node-0:3 --master_port=12001 sft_prompt_tuning.py --deepspeed dp_zero0.json \
--model_name_or_path="meta-llama/Llama-2-7b-chat-hf" \
--dataset_name="timdettmers/openassistant-guanaco" \
--dataset_text_field="text" \
--report_to="tensorboard" \
--learning_rate=1e-5 \
--per_device_train_batch_size=32 \
--gradient_accumulation_steps=4 \
--output_dir="guanaco_Llama-2-7b-chat-hf_prompttuning" \
--logging_steps=1 \
--num_train_epochs=15 \
--max_steps=-1 \
--save_steps=0.3 \
--gradient_checkpointing \
--fp16
```
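The prompt-tuning configuration itself lives inside `sft_prompt_tuning.py`, which is not shown here; for orientation, a typical PEFT prompt-tuning setup for a run like this looks roughly like the sketch below (every value is an assumption, not taken from the script).

```python
# Sketch of a typical PEFT prompt-tuning config for a run like the one above (all values are assumptions).
from peft import PromptTuningConfig, PromptTuningInit, TaskType

prompt_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Answer the user's question helpfully and politely.",  # assumed
    num_virtual_tokens=16,                                                         # assumed
    tokenizer_name_or_path="meta-llama/Llama-2-7b-chat-hf",
)
```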
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
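No starter code is given, so here is a hedged sketch of how a PEFT prompt-tuning adapter such as this one is commonly loaded; the repository and base-model ids come from this card's metadata, while the prompt text and generation settings are illustrative assumptions.

```python
# Minimal sketch (assumption: the prompt-tuning adapter in this repo loads via AutoPeftModelForCausalLM).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-prompttuning"  # this repository

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Human: Explain prompt tuning in one sentence.### Assistant:"  # guanaco-style turns, assumed
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=96)[0], skip_special_tokens=True))
```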
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"license": "apache-2.0", "library_name": "peft", "base_model": "meta-llama/Llama-2-7b-chat-hf"}
|
tricktreat/Llama-2-7b-chat-hf-guanaco-prompttuning
| null |
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:36:03+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #license-apache-2.0 #region-us
|
# Training
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-chat-hf #license-apache-2.0 #region-us \n",
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Developer
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** Bert-base-cased
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
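As the card provides no starter code, the snippet below is a minimal, hedged sketch using the standard `transformers` pipeline; it assumes the checkpoint ships a fine-tuned classification head and tokenizer, and the label names depend on the (undocumented) training data.

```python
# Minimal sketch (assumptions: the checkpoint includes a fine-tuned classification head and its own tokenizer).
from transformers import pipeline

classifier = pipeline("text-classification", model="AbhijitShejal/my_bert_model")
print(classifier("This model card is still mostly a template."))
# -> [{'label': ..., 'score': ...}]  # label names depend on the (undocumented) training data
```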
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["en"], "library_name": "transformers"}
|
AbhijitShejal/my_bert_model
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:36:34+00:00
|
[
"1910.09700"
] |
[
"en"
] |
TAGS
#transformers #safetensors #bert #text-classification #en #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Developer
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]: Bert-base-cased
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Developer\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]: Bert-base-cased",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #en #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: Developer\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]: Bert-base-cased",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_1-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_mouse_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2373
- F1 Score: 0.8958
- Accuracy: 0.8958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3865 | 5.56 | 200 | 0.2775 | 0.8773 | 0.8774 |
| 0.2937 | 11.11 | 400 | 0.2644 | 0.8854 | 0.8854 |
| 0.275 | 16.67 | 600 | 0.2507 | 0.8895 | 0.8895 |
| 0.2619 | 22.22 | 800 | 0.2448 | 0.8914 | 0.8915 |
| 0.2508 | 27.78 | 1000 | 0.2601 | 0.8876 | 0.8876 |
| 0.2427 | 33.33 | 1200 | 0.2402 | 0.8935 | 0.8936 |
| 0.2391 | 38.89 | 1400 | 0.2352 | 0.8977 | 0.8977 |
| 0.2333 | 44.44 | 1600 | 0.2347 | 0.8965 | 0.8965 |
| 0.2305 | 50.0 | 1800 | 0.2366 | 0.8981 | 0.8981 |
| 0.226 | 55.56 | 2000 | 0.2354 | 0.8954 | 0.8955 |
| 0.2225 | 61.11 | 2200 | 0.2350 | 0.8966 | 0.8967 |
| 0.2214 | 66.67 | 2400 | 0.2407 | 0.8974 | 0.8974 |
| 0.217 | 72.22 | 2600 | 0.2365 | 0.8983 | 0.8983 |
| 0.2136 | 77.78 | 2800 | 0.2342 | 0.8968 | 0.8968 |
| 0.212 | 83.33 | 3000 | 0.2358 | 0.8976 | 0.8976 |
| 0.2097 | 88.89 | 3200 | 0.2419 | 0.8952 | 0.8952 |
| 0.2068 | 94.44 | 3400 | 0.2368 | 0.8986 | 0.8986 |
| 0.2051 | 100.0 | 3600 | 0.2334 | 0.9014 | 0.9014 |
| 0.2032 | 105.56 | 3800 | 0.2370 | 0.8998 | 0.8998 |
| 0.2003 | 111.11 | 4000 | 0.2458 | 0.8972 | 0.8973 |
| 0.199 | 116.67 | 4200 | 0.2399 | 0.8996 | 0.8996 |
| 0.1968 | 122.22 | 4400 | 0.2381 | 0.8976 | 0.8976 |
| 0.1952 | 127.78 | 4600 | 0.2400 | 0.8992 | 0.8992 |
| 0.1949 | 133.33 | 4800 | 0.2373 | 0.9011 | 0.9011 |
| 0.1906 | 138.89 | 5000 | 0.2411 | 0.8959 | 0.8959 |
| 0.1901 | 144.44 | 5200 | 0.2493 | 0.8962 | 0.8962 |
| 0.1884 | 150.0 | 5400 | 0.2433 | 0.9011 | 0.9011 |
| 0.187 | 155.56 | 5600 | 0.2464 | 0.8992 | 0.8992 |
| 0.1846 | 161.11 | 5800 | 0.2452 | 0.8990 | 0.8990 |
| 0.184 | 166.67 | 6000 | 0.2462 | 0.8992 | 0.8992 |
| 0.1828 | 172.22 | 6200 | 0.2433 | 0.8981 | 0.8981 |
| 0.1805 | 177.78 | 6400 | 0.2462 | 0.8976 | 0.8976 |
| 0.1807 | 183.33 | 6600 | 0.2462 | 0.8979 | 0.8979 |
| 0.1785 | 188.89 | 6800 | 0.2501 | 0.8971 | 0.8971 |
| 0.178 | 194.44 | 7000 | 0.2553 | 0.8966 | 0.8967 |
| 0.1769 | 200.0 | 7200 | 0.2478 | 0.8977 | 0.8977 |
| 0.1762 | 205.56 | 7400 | 0.2506 | 0.8989 | 0.8989 |
| 0.1757 | 211.11 | 7600 | 0.2499 | 0.8989 | 0.8989 |
| 0.174 | 216.67 | 7800 | 0.2534 | 0.8973 | 0.8973 |
| 0.1734 | 222.22 | 8000 | 0.2520 | 0.8977 | 0.8977 |
| 0.172 | 227.78 | 8200 | 0.2528 | 0.8976 | 0.8976 |
| 0.172 | 233.33 | 8400 | 0.2534 | 0.8964 | 0.8964 |
| 0.1716 | 238.89 | 8600 | 0.2566 | 0.8961 | 0.8961 |
| 0.1708 | 244.44 | 8800 | 0.2549 | 0.8953 | 0.8953 |
| 0.1707 | 250.0 | 9000 | 0.2532 | 0.8962 | 0.8962 |
| 0.1696 | 255.56 | 9200 | 0.2557 | 0.8953 | 0.8953 |
| 0.1688 | 261.11 | 9400 | 0.2542 | 0.8981 | 0.8981 |
| 0.1688 | 266.67 | 9600 | 0.2541 | 0.8974 | 0.8974 |
| 0.1689 | 272.22 | 9800 | 0.2553 | 0.8970 | 0.8970 |
| 0.1679 | 277.78 | 10000 | 0.2547 | 0.8967 | 0.8967 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_mouse_1-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_1-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T02:36:46+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_mouse\_1-seqsight\_8192\_512\_17M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_mouse\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2373
* F1 Score: 0.8958
* Accuracy: 0.8958
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## See [here](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) for the WizardLM-2-7B re-upload.
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing leading open-source models that are 10x larger.
For more details of WizardLM-2 please read our [release blog post](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 8x22B
* **Developed by**: WizardLM@Microsoft AI
* **Model type**: Mixture of Experts (MoE)
* **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
* **Parameters**: 141B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415175608im_/https://wizardlm.github.io/WizardLM2/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
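For orientation only (this is not the official demo script), the Vicuna-style prompt described above can be assembled by hand and passed to a plain `transformers` generation call; note that a 141B MoE normally requires multiple GPUs or an inference server such as vLLM.

```python
# Illustrative sketch of the Vicuna-style multi-turn prompt described above; not the official demo script.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alpindale/WizardLM-2-8x22B"  # this re-upload; 141B params, expect multi-GPU or heavy offloading
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: Who are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```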
|
{"license": "apache-2.0"}
|
alpindale/WizardLM-2-8x22B
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:36:59+00:00
|
[
"2304.12244",
"2306.08568",
"2308.09583"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #conversational #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## See here for the WizardLM-2-7B re-upload.
## News [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 8x22B
* Developed by: WizardLM@Microsoft AI
* Model type: Mixture of Experts (MoE)
* Base model: mistral-community/Mixtral-8x22B-v0.1
* Parameters: 141B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL/URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL/URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL/URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
|
[
"## See here for the WizardLM-2-7B re-upload.",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## See here for the WizardLM-2-7B re-upload.",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 8x22B\n* Developed by: WizardLM@Microsoft AI\n* Model type: Mixture of Experts (MoE)\n* Base model: mistral-community/Mixtral-8x22B-v0.1\n* Parameters: 141B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL/URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
robotics
|
transformers
|
# administrative_processing_model
administrative_processing_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
* [sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: google-bert/bert-base-uncased
layer_range: [0, 32]
- model: sentence-transformers/stsb-xlm-r-multilingual
layer_range: [0, 32]
merge_method: slerp
base_model: google-bert/bert-base-uncased
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/administrative_processing_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "google-bert/bert-base-uncased", "sentence-transformers/stsb-xlm-r-multilingual"], "base_model": ["google-bert/bert-base-uncased", "sentence-transformers/stsb-xlm-r-multilingual"], "pipeline_tag": "robotics"}
|
nagayama0706/administrative_processing_model
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"google-bert/bert-base-uncased",
"sentence-transformers/stsb-xlm-r-multilingual",
"robotics",
"base_model:google-bert/bert-base-uncased",
"base_model:sentence-transformers/stsb-xlm-r-multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:38:39+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #google-bert/bert-base-uncased #sentence-transformers/stsb-xlm-r-multilingual #robotics #base_model-google-bert/bert-base-uncased #base_model-sentence-transformers/stsb-xlm-r-multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# administrative_processing_model
administrative_processing_model is a merge of the following models using LazyMergekit:
* google-bert/bert-base-uncased
* sentence-transformers/stsb-xlm-r-multilingual
## Configuration
## Usage
|
[
"# administrative_processing_model\n\nadministrative_processing_model is a merge of the following models using LazyMergekit:\n* google-bert/bert-base-uncased\n* sentence-transformers/stsb-xlm-r-multilingual",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #google-bert/bert-base-uncased #sentence-transformers/stsb-xlm-r-multilingual #robotics #base_model-google-bert/bert-base-uncased #base_model-sentence-transformers/stsb-xlm-r-multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# administrative_processing_model\n\nadministrative_processing_model is a merge of the following models using LazyMergekit:\n* google-bert/bert-base-uncased\n* sentence-transformers/stsb-xlm-r-multilingual",
"## Configuration",
"## Usage"
] |
null |
adapter-transformers
|
## Hyperparameter
```bash
deepspeed --include=node-0:2 sft_fix_target_modules.py --deepspeed dp_zero0.json \
--model_name_or_path="guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens_q_v_proj" \
--dataset_name="timdettmers/openassistant-guanaco" \
--dataset_text_field="text" \
--report_to="tensorboard" \
--learning_rate=1e-5 \
--per_device_train_batch_size=32 \
--gradient_accumulation_steps=4 \
--output_dir="guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens_q_v_proj_lora" \
--logging_steps=1 \
--num_train_epochs=15 \
--max_steps=-1 \
--gradient_checkpointing \
--fp16 \
--save_steps=0.3 \
--use_peft \
--lora_r=64 \
--lora_alpha=16
```
## Dataset
`timdettmers/openassistant-guanaco`
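Given the training command and dataset above, a minimal inference sketch for the resulting checkpoint could look like this (it assumes the repository stores full model weights; the Guanaco-style prompt format is also an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# Guanaco-style prompt (assumption based on the training dataset)
prompt = "### Human: What is the Guanaco dataset?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```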
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"license": "apache-2.0", "library_name": "adapter-transformers", "datasets": ["timdettmers/openassistant-guanaco"]}
|
tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj
| null |
[
"adapter-transformers",
"tensorboard",
"safetensors",
"llama",
"dataset:timdettmers/openassistant-guanaco",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:40:08+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#adapter-transformers #tensorboard #safetensors #llama #dataset-timdettmers/openassistant-guanaco #arxiv-1910.09700 #license-apache-2.0 #region-us
|
## Hyperparameter
## Dataset
'timdettmers/openassistant-guanaco'
# Model Card for Model ID
This modelcard aims to be a base template for new models. It has been generated using this raw template.
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"## Hyperparameter",
"## Dataset\n\n'timdettmers/openassistant-guanaco'",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#adapter-transformers #tensorboard #safetensors #llama #dataset-timdettmers/openassistant-guanaco #arxiv-1910.09700 #license-apache-2.0 #region-us \n",
"## Hyperparameter",
"## Dataset\n\n'timdettmers/openassistant-guanaco'",
"# Model Card for Model ID\n\n\n\nThis modelcard aims to be a base template for new models. It has been generated using this raw template.",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# 🌟 Checkout [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 7B v2.0.1 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-7B-v2.0-base](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-base)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance

## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-7B-v2.0.1-chat", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
### Training hyperparameters



The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
## Citation
If you find Taiwan LLM is useful in your work, please cite it with:
```
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["zh"], "license": "apache-2.0", "library_name": "transformers", "widget": [{"text": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: \u4f60\u597d\uff0c\u8acb\u554f\u4f60\u53ef\u4ee5\u5e6b\u6211\u5beb\u4e00\u5c01\u63a8\u85a6\u4fe1\u55ce\uff1f ASSISTANT:"}], "pipeline_tag": "text-generation"}
|
ZoneTwelve/Taiwan-LLM-7B-v2.0.1-chat-GGUF
| null |
[
"transformers",
"gguf",
"text-generation",
"zh",
"arxiv:2311.17487",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:40:26+00:00
|
[
"2311.17487"
] |
[
"zh"
] |
TAGS
#transformers #gguf #text-generation #zh #arxiv-2311.17487 #license-apache-2.0 #endpoints_compatible #region-us
|
<img src="URL alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Checkout Taiwan-LLM Demo Chat-UI
# Model Card for Taiwan LLM 7B v2.0.1 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our technical report.
## Model description
- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)
- Finetuned from model: yentinglin/Taiwan-LLM-7B-v2.0-base
### Model Sources
- Repository: URL
- Demo: URL
## Performance
!image/png
## Intended uses
Here's how you can run the model using the 'pipeline()' function from Transformers:
### Training hyperparameters
!image/png
!image/png
!image/png
The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
If you find Taiwan LLM is useful in your work, please cite it with:
|
[
"# Checkout Taiwan-LLM Demo Chat-UI",
"# Model Card for Taiwan LLM 7B v2.0.1 chat\n\nTaiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan. \nDeveloped from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning. \nThis model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances. \nIt demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance. \nFor detailed insights into Taiwan LLM's development and features, refer to our technical report.",
"## Model description\n\n- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)\n- Finetuned from model: yentinglin/Taiwan-LLM-7B-v2.0-base",
"### Model Sources\n\n\n\n- Repository: URL\n- Demo: URL",
"## Performance\n\n\n!image/png",
"## Intended uses\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:",
"### Training hyperparameters\n\n!image/png\n\n!image/png\n\n\n!image/png\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5.0\n\nIf you find Taiwan LLM is useful in your work, please cite it with:"
] |
[
"TAGS\n#transformers #gguf #text-generation #zh #arxiv-2311.17487 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Checkout Taiwan-LLM Demo Chat-UI",
"# Model Card for Taiwan LLM 7B v2.0.1 chat\n\nTaiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan. \nDeveloped from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning. \nThis model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances. \nIt demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance. \nFor detailed insights into Taiwan LLM's development and features, refer to our technical report.",
"## Model description\n\n- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.\n- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)\n- Finetuned from model: yentinglin/Taiwan-LLM-7B-v2.0-base",
"### Model Sources\n\n\n\n- Repository: URL\n- Demo: URL",
"## Performance\n\n\n!image/png",
"## Intended uses\n\nHere's how you can run the model using the 'pipeline()' function from Transformers:",
"### Training hyperparameters\n\n!image/png\n\n!image/png\n\n\n!image/png\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 5.0\n\nIf you find Taiwan LLM is useful in your work, please cite it with:"
] |
null |
peft
|
# Training
```
deepspeed --include=node-0:2 sft_fix_target_modules.py --deepspeed dp_zero0.json \
--model_name_or_path="guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens_q_v_proj" \
--dataset_name="timdettmers/openassistant-guanaco" \
--dataset_text_field="text" \
--report_to="tensorboard" \
--learning_rate=1e-5 \
--per_device_train_batch_size=32 \
--gradient_accumulation_steps=4 \
--output_dir="guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens_q_v_proj_lora" \
--logging_steps=1 \
--num_train_epochs=15 \
--max_steps=-1 \
--gradient_checkpointing \
--fp16 \
--save_steps=0.3 \
--use_peft \
--lora_r=64 \
--lora_alpha=16
```
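Because this repository stores a PEFT LoRA adapter trained on top of the base model above, loading it for inference looks roughly like the sketch below (repository ids are taken from this card; the prompt format is an assumption):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj"
adapter_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

prompt = "### Human: Summarize the Guanaco dataset.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```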
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"license": "apache-2.0", "library_name": "peft", "base_model": "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj"}
|
tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj-lora
| null |
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:41:12+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj #license-apache-2.0 #region-us
|
# Training
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj #license-apache-2.0 #region-us \n",
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
null |
peft
|
# Training
```
deepspeed --include=node-0:3 --master_port=12001 sft_prompt_tuning.py --deepspeed dp_zero0.json \
--model_name_or_path="guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens_q_v_proj" \
--dataset_name="timdettmers/openassistant-guanaco" \
--dataset_text_field="text" \
--report_to="tensorboard" \
--learning_rate=1e-5 \
--per_device_train_batch_size=32 \
--gradient_accumulation_steps=4 \
--output_dir="guanaco_Llama-2-7b-chat-hf_freeze_embed_tokens_q_v_projs_prompttuning" \
--logging_steps=1 \
--num_train_epochs=15 \
--max_steps=-1 \
--save_steps=0.3 \
--gradient_checkpointing \
--fp16
```
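The resulting prompt-tuning adapter can be attached to the base model with PEFT in the same way as any other adapter; a minimal sketch (repository ids are taken from this card, the prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj"
adapter_id = "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj-prompttuning"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # prepends the learned soft prompt

inputs = tokenizer("### Human: Hello!### Assistant:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```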
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0
|
{"license": "apache-2.0", "library_name": "peft", "base_model": "tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj"}
|
tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj-prompttuning
| null |
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:42:08+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj #license-apache-2.0 #region-us
|
# Training
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.1.dev0
|
[
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
[
"TAGS\n#peft #tensorboard #safetensors #arxiv-1910.09700 #base_model-tricktreat/Llama-2-7b-chat-hf-guanaco-freeze-embed-tokens-q-v-proj #license-apache-2.0 #region-us \n",
"# Training",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.1.dev0"
] |
text-generation
|
transformers
|
# amazingvince/Not-WizardLM-2-7B
<a href="https://colab.research.google.com/gist/pszemraj/d3d74ceab942722b49188606785e2bfd/not-wizardlm-2-7b-inference.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Included is code ripped from FastChat with the expected chat templating.
Also included is wiz.pdf, a PDF of the GitHub blog post showing the Apache 2.0 release.
Wayback Machine link: https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/
## example
```python
import dataclasses
from enum import auto, Enum
from typing import List, Tuple, Any
class SeparatorStyle(Enum):
"""Different separator style."""
SINGLE = auto()
TWO = auto()
@dataclasses.dataclass
class Conversation:
"""A class that keeps all conversation history."""
system: str
roles: List[str]
messages: List[List[str]]
offset: int
sep_style: SeparatorStyle = SeparatorStyle.SINGLE
sep: str = "###"
sep2: str = None
# Used for gradio server
skip_next: bool = False
conv_id: Any = None
def get_prompt(self):
if self.sep_style == SeparatorStyle.SINGLE:
ret = self.system
for role, message in self.messages:
if message:
ret += self.sep + " " + role + ": " + message
else:
ret += self.sep + " " + role + ":"
return ret
elif self.sep_style == SeparatorStyle.TWO:
seps = [self.sep, self.sep2]
ret = self.system + seps[0]
for i, (role, message) in enumerate(self.messages):
if message:
ret += role + ": " + message + seps[i % 2]
else:
ret += role + ":"
return ret
else:
raise ValueError(f"Invalid style: {self.sep_style}")
def append_message(self, role, message):
self.messages.append([role, message])
def to_gradio_chatbot(self):
ret = []
for i, (role, msg) in enumerate(self.messages[self.offset:]):
if i % 2 == 0:
ret.append([msg, None])
else:
ret[-1][-1] = msg
return ret
def copy(self):
return Conversation(
system=self.system,
roles=self.roles,
messages=[[x, y] for x, y in self.messages],
offset=self.offset,
sep_style=self.sep_style,
sep=self.sep,
sep2=self.sep2,
conv_id=self.conv_id)
def dict(self):
return {
"system": self.system,
"roles": self.roles,
"messages": self.messages,
"offset": self.offset,
"sep": self.sep,
"sep2": self.sep2,
"conv_id": self.conv_id,
}
conv = Conversation(
system="A chat between a curious user and an artificial intelligence assistant. "
"The assistant gives helpful, detailed, and polite answers to the user's questions.",
roles=("USER", "ASSISTANT"),
messages=[],
offset=0,
sep_style=SeparatorStyle.TWO,
sep=" ",
sep2="</s>",
)
conv.append_message(conv.roles[0], "Why would Microsoft take this down?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# The snippet assumes `model` and `tokenizer` are already loaded; for example:
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("amazingvince/Not-WizardLM-2-7B")
model = AutoModelForCausalLM.from_pretrained(
    "amazingvince/Not-WizardLM-2-7B", torch_dtype="auto", device_map="auto"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
result = model.generate(**inputs, max_new_tokens=1000)
generated_ids = result[0]
generated_text = tokenizer.decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```
|
{"license": "apache-2.0"}
|
amazingvince/Not-WizardLM-2-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:43:07+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# amazingvince/Not-WizardLM-2-7B
<a href="URL
<img src="URL alt="Open In Colab"/>
</a>
Included is code ripped from fastchat with the expected chat templating.
Also URL is a pdf of the github blog showing the apache 2 release.
Link to wayback machine included: URL/URL
## example
|
[
"# amazingvince/Not-WizardLM-2-7B\n\n<a href=\"URL\n <img src=\"URL alt=\"Open In Colab\"/>\n</a>\n\nIncluded is code ripped from fastchat with the expected chat templating.\n\nAlso URL is a pdf of the github blog showing the apache 2 release.\nLink to wayback machine included: URL/URL",
"## example"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# amazingvince/Not-WizardLM-2-7B\n\n<a href=\"URL\n <img src=\"URL alt=\"Open In Colab\"/>\n</a>\n\nIncluded is code ripped from fastchat with the expected chat templating.\n\nAlso URL is a pdf of the github blog showing the apache 2 release.\nLink to wayback machine included: URL/URL",
"## example"
] |
null | null |
# Classifiers Enhanced by Pre-training
This project utilizes a visual encoder from the pre-trained CLIP (ViT-B/32) to build image classifiers. To use the trained models, follow the steps below to set up and run the classifiers.
## Prerequisites
Before you start, make sure you have Python and the necessary libraries installed.
## Download the Trained Models and CIFAR-100 Dataset
You need to download the following trained model weights and CIFAR-100 dataset for running the project:
- `fine-tune-best.pth`: Best model weights after fine-tuning.
- `linear-probe-best.pth`: Best model weights after the linear probe training.
- `train-from-scratch-best.pth`: Best model weights trained from scratch.
Please download these files and place them under the `results/` directory within the project folder.
- `cifar-100-python.tar.gz`: CIFAR-100 dataset.
Please download this file and place it under the `data/` directory within the project folder.
## Installation and Usage
See https://github.com/Gengsheng-Li/Classifiers-enhanced-by-pre-training for more details.
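As a rough orientation, loading one of the downloaded checkpoints could look like the sketch below; the classifier-head layout and state-dict keys are assumptions, so refer to the repository above for the exact architecture:
```python
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

class ClipClassifier(nn.Module):
    """Hypothetical classifier: CLIP visual encoder plus a linear head over 100 CIFAR classes."""
    def __init__(self, visual_encoder, num_classes=100):
        super().__init__()
        self.encoder = visual_encoder
        self.head = nn.Linear(512, num_classes)  # ViT-B/32 image features are 512-d

    def forward(self, images):
        feats = self.encoder(images.type(self.encoder.conv1.weight.dtype))
        return self.head(feats.float())

model = ClipClassifier(clip_model.visual).to(device)
state = torch.load("results/fine-tune-best.pth", map_location=device)
model.load_state_dict(state, strict=False)  # key names may differ from this sketch
model.eval()
```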
|
{"license": "mit"}
|
RyukiRi/Classifiers-Enhanced-by-Pre-training
| null |
[
"license:mit",
"region:us"
] | null |
2024-04-16T02:44:00+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Classifiers Enhanced by Pre-training
This project utilizes a visual encoder from the pre-trained CLIP (ViT-B/32) to build image classifiers. To use the trained models, follow the steps below to set up and run the classifiers.
## Prerequisites
Before you start, make sure you have Python and the necessary libraries installed.
## Download the Trained Models and CIFAR-100 Dataset
You need to download the following trained model weights and CIFAR-100 dataset for running the project:
- 'URL': Best model weights after fine-tuning.
- 'URL': Best model weights after the linear probe training.
- 'URL': Best model weights trained from scratch.
Please download these files and place them under the 'results/' directory within the project folder.
- 'URL': CIFAR-100 dataset.
Please download this file and place it under the 'data/' directory within the project folder.
## Installation and Usage
See URL for more details.
|
[
"# Classifiers Enhanced by Pre-training\n\nThis project utilizes a visual encoder from the pre-trained CLIP (ViT-B/32) to build image classifiers. To use the trained models, follow the steps below to set up and run the classifiers.",
"## Prerequisites\n\nBefore you start, make sure you have Python and the necessary libraries installed.",
"## Download the Trained Models and CIFAR-100 Dataset\n\nYou need to download the following trained model weights and CIFAR-100 dataset for running the project:\n- 'URL': Best model weights after fine-tuning.\n- 'URL': Best model weights after the linear probe training.\n- 'URL': Best model weights trained from scratch.\n\nPlease download these files and place them under the 'results/' directory within the project folder.\n\n- 'URL': CIFAR-100 dataset.\n\nPlease download this file and place it under the 'data/' directory within the project folder.",
"## Installation and Usage\n\nSee URL for more details."
] |
[
"TAGS\n#license-mit #region-us \n",
"# Classifiers Enhanced by Pre-training\n\nThis project utilizes a visual encoder from the pre-trained CLIP (ViT-B/32) to build image classifiers. To use the trained models, follow the steps below to set up and run the classifiers.",
"## Prerequisites\n\nBefore you start, make sure you have Python and the necessary libraries installed.",
"## Download the Trained Models and CIFAR-100 Dataset\n\nYou need to download the following trained model weights and CIFAR-100 dataset for running the project:\n- 'URL': Best model weights after fine-tuning.\n- 'URL': Best model weights after the linear probe training.\n- 'URL': Best model weights trained from scratch.\n\nPlease download these files and place them under the 'results/' directory within the project folder.\n\n- 'URL': CIFAR-100 dataset.\n\nPlease download this file and place it under the 'data/' directory within the project folder.",
"## Installation and Usage\n\nSee URL for more details."
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
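Until the usage section is filled in, a minimal sketch of loading and evaluating the checkpoint might look like this (the checkpoint filename is an assumption; check the repository's file list):
```python
import gymnasium as gym  # requires gymnasium[box2d] for LunarLander
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust it to the actual .zip stored in the repo
checkpoint = load_from_hub(repo_id="wgouyang/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```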
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "249.01 +/- 21.72", "name": "mean_reward", "verified": false}]}]}]}
|
wgouyang/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-16T02:44:40+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
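Since the card does not state the architecture or task, only a generic load can be sketched; whether `AutoModel` resolves to the right class depends on the checkpoint's config, so treat this as an assumption:

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "abhayesian/BobzillaV24"  # repo from this card; task and model head are unknown
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```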
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
abhayesian/BobzillaV24
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:45:15+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# WizardLM-2-8x22B - EXL2 3.0bpw
This is a 3.0bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B)
Details about the model can be found at the above model page.
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
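For reference, a minimal sketch of loading an EXL2 quant directly with the exllamav2 Python API (roughly the 0.0.18-era interface; the local model path and sampling settings are assumptions) is shown below. Most users will instead load the quant through Text Generation WebUI.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a downloaded quant directory (path is an assumption, matching the layout used by the scripts below).
config = ExLlamaV2Config()
config.model_dir = "models/WizardLM-2-8x22B_exl2_3.0bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache, then load weights split across GPUs
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Hello, my name is", settings, 64))
```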
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level | Perplexity Score |
|-------------|------------------|
| 7.0 | 4.5859 |
| 6.0 | 4.6252 |
| 5.5 | 4.6493 |
| 5.0 | 4.6937 |
| 4.5 | 4.8029 |
| 4.0 | 4.9372 |
| 3.5 | 5.1336 |
| 3.25 | 5.3636 |
| 3.0 | 5.5468 |
| 2.75 | 5.8255 |
| 2.5 | 6.3362 |
| 2.25 | 7.7763 |
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
DATA_SET=/root/wikitext/wikitext-2-v1.parquet
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
if [ ! -d "$LOCAL_FOLDER" ]; then
huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1
fi
output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET")
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
echo "| $BIT_PRECISION | $score |"
# rm -rf "${LOCAL_FOLDER}"
done
```
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
# Define variables
MODEL_DIR="/mnt/storage/models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
echo "Creating $MEASUREMENT_FILE"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
# If it doesn't already exist, make the quant
if [ ! -d "$CONVERTED_FOLDER" ]; then
echo "Creating $CONVERTED_FOLDER"
# Create directories
if [ -d "$OUTPUT_DIR" ]; then
rm -r "$OUTPUT_DIR"
fi
mkdir "$OUTPUT_DIR"
mkdir "$CONVERTED_FOLDER"
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
fi
done
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["exl2"], "base_model": "microsoft/WizardLM-2-8x22B"}
|
Dracones/WizardLM-2-8x22B_exl2_3.0bpw
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"exl2",
"en",
"base_model:microsoft/WizardLM-2-8x22B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null |
2024-04-16T02:46:00+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
|
WizardLM-2-8x22B - EXL2 3.0bpw
==============================
This is a 3.0bpw EXL2 quant of microsoft/WizardLM-2-8x22B
Details about the model can be found at the above model page.
EXL2 Version
------------
These quants were made with exllamav2 version 0.0.18. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
Perplexity Scoring
------------------
Below are the perplexity scores for the EXL2 models. A lower score is better.
### Perplexity Script
This was the script used for perplexity testing.
Quant Details
-------------
This is the script used for quantization.
|
[
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #exl2 #en #base_model-microsoft/WizardLM-2-8x22B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n",
"### Perplexity Script\n\n\nThis was the script used for perplexity testing.\n\n\nQuant Details\n-------------\n\n\nThis is the script used for quantization."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small nepali - Rikesh Silwal
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the slr43 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3583
- Wer: 33.7199
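As a quick check of the fine-tuned checkpoint, a minimal transcription sketch with the transformers pipeline (the audio file path is an assumption) is:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RikeshSilwal/whisper-small-hi-transfer-ne",
)
# Whisper expects 16 kHz audio; the pipeline resamples common formats automatically.
print(asr("sample_nepali.wav")["text"])
```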
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0061 | 9.62 | 1000 | 0.3096 | 36.4853 |
| 0.0001 | 19.23 | 2000 | 0.3306 | 34.2551 |
| 0.0 | 28.85 | 3000 | 0.3525 | 33.5712 |
| 0.0 | 38.46 | 4000 | 0.3583 | 33.7199 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"language": ["ne"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["openslr/slr43"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper small nepali - Rikesh Silwal", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "slr43", "type": "openslr/slr43", "args": "config: ne, split: test"}, "metrics": [{"type": "wer", "value": 33.719892952720784, "name": "Wer"}]}]}]}
|
RikeshSilwal/whisper-small-hi-transfer-ne
| null |
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ne",
"dataset:openslr/slr43",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:46:16+00:00
|
[] |
[
"ne"
] |
TAGS
#transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ne #dataset-openslr/slr43 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
Whisper small nepali - Rikesh Silwal
====================================
This model is a fine-tuned version of openai/whisper-small on the slr43 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3583
* Wer: 33.7199
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ne #dataset-openslr/slr43 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-len_3-filtered-v2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
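Because this repository holds a PEFT adapter rather than full model weights, it has to be attached to the base model at load time. A minimal sketch is below; using a plain `AutoModel` head is an assumption, since the target task is not stated on this card:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModel, AutoTokenizer

adapter_id = "Shalazary/ruBert-base-sberquad-0.005-len_3-filtered-v2"
config = PeftConfig.from_pretrained(adapter_id)

# Load the frozen base model, then attach the adapter weights on top of it.
base = AutoModel.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```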
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-len_3-filtered-v2", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.005-len_3-filtered-v2
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T02:48:39+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-len_3-filtered-v2
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.005-len_3-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-len_3-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
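Given the BERT text-classification tags on this repository, a minimal sketch would be the following; the label set is not documented, so the returned labels are whatever the checkpoint defines:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Vissa15AI/fine_tuned_10012023")
print(classifier("Example sentence to classify."))
```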
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
Vissa15AI/fine_tuned_10012023
| null |
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T02:49:16+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
visual-question-answering
|
transformers
|
# multimodal_model
multimodal_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [CIDAS/clipseg-rd64-refined](https://huggingface.co/CIDAS/clipseg-rd64-refined)
* [dalle-mini/dalle-mini](https://huggingface.co/dalle-mini/dalle-mini)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CIDAS/clipseg-rd64-refined
layer_range: [0, 32]
- model: dalle-mini/dalle-mini
layer_range: [0, 32]
merge_method: slerp
base_model: CIDAS/clipseg-rd64-refined
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/multimodal_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "CIDAS/clipseg-rd64-refined", "dalle-mini/dalle-mini"], "base_model": ["CIDAS/clipseg-rd64-refined", "dalle-mini/dalle-mini"], "pipeline_tag": "visual-question-answering"}
|
nagayama0706/multimodal_model
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CIDAS/clipseg-rd64-refined",
"dalle-mini/dalle-mini",
"visual-question-answering",
"base_model:CIDAS/clipseg-rd64-refined",
"base_model:dalle-mini/dalle-mini",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:51:09+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #CIDAS/clipseg-rd64-refined #dalle-mini/dalle-mini #visual-question-answering #base_model-CIDAS/clipseg-rd64-refined #base_model-dalle-mini/dalle-mini #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# multimodal_model
multimodal_model is a merge of the following models using LazyMergekit:
* CIDAS/clipseg-rd64-refined
* dalle-mini/dalle-mini
## Configuration
## Usage
|
[
"# multimodal_model\n\nmultimodal_model is a merge of the following models using LazyMergekit:\n* CIDAS/clipseg-rd64-refined\n* dalle-mini/dalle-mini",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #CIDAS/clipseg-rd64-refined #dalle-mini/dalle-mini #visual-question-answering #base_model-CIDAS/clipseg-rd64-refined #base_model-dalle-mini/dalle-mini #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# multimodal_model\n\nmultimodal_model is a merge of the following models using LazyMergekit:\n* CIDAS/clipseg-rd64-refined\n* dalle-mini/dalle-mini",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
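A minimal transformers sketch that applies this Vicuna-style format is shown below; the repo id is this mirror's, and the generation settings are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lucyknada/microsoft_WizardLM-2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a single-turn prompt in the Vicuna format described above.
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: Who are you? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```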
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
|
{"license": "apache-2.0"}
|
lucyknada/microsoft_WizardLM-2-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:51:57+00:00
|
[
"2304.12244",
"2306.08568",
"2308.09583"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
<p style="font-size:20px;" align="center">
<a href="URL target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL target="_blank">HF Repo</a> • <a href="URL target="_blank">Github Repo</a> • <a href="URL target="_blank">Twitter</a> • <a href="URL target="_blank">[WizardLM]</a> • <a href="URL target="_blank">[WizardCoder]</a> • <a href="URL target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 7B
* Developed by: WizardLM@Microsoft AI
* Base model: mistralai/Mistral-7B-v0.1
* Parameters: 7B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
|
[
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* Paper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github."
] |
text-generation
|
transformers
|
This is a reupload of the fp16 safetensors of WizardLM-2-7b that were taken down by Microsoft.
The original model card is below:
_____________________________________________________________________
license: apache-2.0
---
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and opensource WizardLM-2, our next generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning and agent.
New family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works
and consistently outperforms all the existing state-of-the-art opensource models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size.
- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual.
We report the win:loss rate without tie:
- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as following:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
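If the demo repository is unavailable, a minimal local-inference sketch using the 🤗 Transformers text-generation API is shown below. It assumes this repository's checkpoint loads as a standard causal LM (consistent with its mistral/text-generation tags); the generation settings are illustrative, not official defaults.

```python
# Minimal sketch (not the official demo): build the Vicuna-style multi-turn prompt
# described above and generate a reply with a standard causal-LM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Replete-AI/WizardLM-2-7b"  # this repository; swap in another WizardLM-2 checkpoint if preferred

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
history = [("Hi", "Hello.")]              # earlier (user, assistant) turns
user_message = "Who are you?"

# Vicuna-style prompt: system text, then alternating USER/ASSISTANT turns ending with an open ASSISTANT slot.
prompt = system + " "
for user_turn, assistant_turn in history:
    prompt += f"USER: {user_turn} ASSISTANT: {assistant_turn}</s>"
prompt += f"USER: {user_message} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```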
---





|
{"license": "apache-2.0"}
|
Replete-AI/WizardLM-2-7b
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:52:12+00:00
|
[
"2304.12244",
"2306.08568",
"2308.09583"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is a reupload of the WizardLM-2-7b fp16 safetensors that were taken down by Microsoft.
The original model card is below:
_____________________________________________________________________
license: apache-2.0
---
<p style="font-size:20px;" align="center">
<a href="URL" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
<a href="URL" target="_blank">HF Repo</a> • <a href="URL" target="_blank">Github Repo</a> • <a href="URL" target="_blank">Twitter</a> • <a href="URL" target="_blank">[WizardLM]</a> • <a href="URL" target="_blank">[WizardCoder]</a> • <a href="URL" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
Join our <a href="URL" target="_blank">Discord</a>
</p>
## News [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capability and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing leading open-source models that are 10x larger.
For more details on WizardLM-2, please read our release blog post and upcoming paper.
## Model Details
* Model name: WizardLM-2 7B
* Developed by: WizardLM@Microsoft AI
* Base model: mistralai/Mistral-7B-v0.1
* Parameters: 7B
* Language(s): Multilingual
* Blog: Introducing WizardLM-2
* Repository: URL
* Paper: WizardLM-2 (Upcoming)
* License: Apache2.0
## Model Capacities
MT-Bench
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="URL alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Human Preferences Evaluation
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human requirements, such as writing, coding, math, reasoning, agent, and multilingual tasks.
We report the win:loss rates, excluding ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="URL alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a fully AI-powered synthetic training system to train the WizardLM-2 models; please refer to our blog for more details of this system.
<p align="center" width="100%">
<a ><img src="URL alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
<b>Note on model system prompt usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be formatted as follows:
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo code on our github.
---
!image/png
!image/png
!image/png
!image/png
!image/png
|
[
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* \nPaper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github.\n\n---\n\n!image/png\n\n!image/png\n\n!image/png\n\n!image/png\n\n!image/png"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-2304.12244 #arxiv-2306.08568 #arxiv-2308.09583 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## News [2024/04/15]\n\nWe introduce and opensource WizardLM-2, our next generation state-of-the-art large language models, \nwhich have improved performance on complex chat, multilingual, reasoning and agent. \nNew family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.\n\n- WizardLM-2 8x22B is our most advanced model, demonstrates highly competitive performance compared to those leading proprietary works \nand consistently outperforms all the existing state-of-the-art opensource models.\n- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice in the same size. \n- WizardLM-2 7B is the fastest and achieves comparable performance with existing 10x larger opensource leading models.\n\nFor more details of WizardLM-2 please read our release blog post and upcoming paper.",
"## Model Details\n\n* Model name: WizardLM-2 7B\n* Developed by: WizardLM@Microsoft AI\n* Base model: mistralai/Mistral-7B-v0.1\n* Parameters: 7B\n* Language(s): Multilingual\n* Blog: Introducing WizardLM-2\n* Repository: URL\n* \nPaper: WizardLM-2 (Upcoming)\n* License: Apache2.0",
"## Model Capacities\n\n\nMT-Bench\n\nWe also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models. \nThe WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models. \nMeanwhile, WizardLM-2 7B and WizardLM-2 70B are all the top-performing models among the other leading baselines at 7B to 70B model scales.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"MTBench\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>\n\n\nHuman Preferences Evaluation\n\nWe carefully collected a complex and challenging set consisting of real-world instructions, which includes main requirements of humanity, such as writing, coding, math, reasoning, agent, and multilingual. \nWe report the win:loss rate without tie:\n\n- WizardLM-2 8x22B is just slightly falling behind GPT-4-1106-preview, and significantly stronger than Command R Plus and GPT4-0314.\n- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.\n- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Win\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Method Overview\nWe built a fully AI powered synthetic training system to train WizardLM-2 models, please refer to our blog for more details of this system.\n\n<p align=\"center\" width=\"100%\">\n<a ><img src=\"URL alt=\"Method\" style=\"width: 96%; min-width: 300px; display: block; margin: auto;\"></a>\n</p>",
"## Usage\n\n<b>Note for model system prompts usage:</b>\n\n\n<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports multi-turn conversation. The prompt should be as following:\n\n\n\n<b> Inference WizardLM-2 Demo Script</b>\n\nWe provide a WizardLM-2 inference demo code on our github.\n\n---\n\n!image/png\n\n!image/png\n\n!image/png\n\n!image/png\n\n!image/png"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_4-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_mouse_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6371
- F1 Score: 0.6765
- Accuracy: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
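As a rough illustration only, the hyperparameters above correspond to a `transformers.TrainingArguments` configuration along the following lines (the actual training script, model, and dataset objects are not shown here):

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments;
# this is not the exact script used to produce the results below.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GUE_mouse_4-seqsight_8192_512_17M-L32_all",
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```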
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6129 | 25.0 | 200 | 0.5808 | 0.6854 | 0.6872 |
| 0.5595 | 50.0 | 400 | 0.5694 | 0.6957 | 0.6973 |
| 0.5351 | 75.0 | 600 | 0.5560 | 0.7148 | 0.7148 |
| 0.516 | 100.0 | 800 | 0.5644 | 0.7131 | 0.7132 |
| 0.4948 | 125.0 | 1000 | 0.5783 | 0.7163 | 0.7164 |
| 0.4728 | 150.0 | 1200 | 0.5800 | 0.7154 | 0.7153 |
| 0.4502 | 175.0 | 1400 | 0.6093 | 0.6956 | 0.6962 |
| 0.4273 | 200.0 | 1600 | 0.6304 | 0.7026 | 0.7026 |
| 0.4038 | 225.0 | 1800 | 0.6517 | 0.6982 | 0.6984 |
| 0.3809 | 250.0 | 2000 | 0.6756 | 0.7032 | 0.7031 |
| 0.3604 | 275.0 | 2200 | 0.6981 | 0.6965 | 0.6968 |
| 0.3392 | 300.0 | 2400 | 0.7120 | 0.7006 | 0.7005 |
| 0.3212 | 325.0 | 2600 | 0.7268 | 0.7038 | 0.7037 |
| 0.3035 | 350.0 | 2800 | 0.7548 | 0.7036 | 0.7037 |
| 0.2869 | 375.0 | 3000 | 0.7643 | 0.6989 | 0.6989 |
| 0.2724 | 400.0 | 3200 | 0.7998 | 0.7020 | 0.7021 |
| 0.2582 | 425.0 | 3400 | 0.8173 | 0.7006 | 0.7005 |
| 0.2463 | 450.0 | 3600 | 0.8337 | 0.6979 | 0.6978 |
| 0.2359 | 475.0 | 3800 | 0.8392 | 0.6947 | 0.6946 |
| 0.2254 | 500.0 | 4000 | 0.8964 | 0.6957 | 0.6957 |
| 0.2156 | 525.0 | 4200 | 0.8804 | 0.6995 | 0.6994 |
| 0.2064 | 550.0 | 4400 | 0.9188 | 0.7021 | 0.7021 |
| 0.1998 | 575.0 | 4600 | 0.9170 | 0.6947 | 0.6946 |
| 0.1937 | 600.0 | 4800 | 0.9380 | 0.7011 | 0.7010 |
| 0.1871 | 625.0 | 5000 | 0.9506 | 0.7016 | 0.7015 |
| 0.1808 | 650.0 | 5200 | 0.9607 | 0.7032 | 0.7031 |
| 0.1744 | 675.0 | 5400 | 0.9773 | 0.7022 | 0.7021 |
| 0.1695 | 700.0 | 5600 | 0.9994 | 0.7032 | 0.7031 |
| 0.1649 | 725.0 | 5800 | 0.9892 | 0.7069 | 0.7069 |
| 0.1605 | 750.0 | 6000 | 1.0234 | 0.7027 | 0.7026 |
| 0.1569 | 775.0 | 6200 | 1.0388 | 0.7059 | 0.7058 |
| 0.153 | 800.0 | 6400 | 1.0447 | 0.7048 | 0.7047 |
| 0.1478 | 825.0 | 6600 | 1.0544 | 0.7074 | 0.7074 |
| 0.1467 | 850.0 | 6800 | 1.0638 | 0.7042 | 0.7042 |
| 0.1432 | 875.0 | 7000 | 1.0631 | 0.6979 | 0.6978 |
| 0.1417 | 900.0 | 7200 | 1.0587 | 0.7043 | 0.7042 |
| 0.1383 | 925.0 | 7400 | 1.0634 | 0.7091 | 0.7090 |
| 0.136 | 950.0 | 7600 | 1.0832 | 0.7042 | 0.7042 |
| 0.1338 | 975.0 | 7800 | 1.0962 | 0.7038 | 0.7037 |
| 0.1312 | 1000.0 | 8000 | 1.1061 | 0.7022 | 0.7021 |
| 0.1302 | 1025.0 | 8200 | 1.1149 | 0.7064 | 0.7063 |
| 0.1283 | 1050.0 | 8400 | 1.1085 | 0.7064 | 0.7063 |
| 0.1251 | 1075.0 | 8600 | 1.1276 | 0.7059 | 0.7058 |
| 0.1265 | 1100.0 | 8800 | 1.1244 | 0.7022 | 0.7021 |
| 0.1254 | 1125.0 | 9000 | 1.1199 | 0.7070 | 0.7069 |
| 0.1247 | 1150.0 | 9200 | 1.1202 | 0.7022 | 0.7021 |
| 0.1222 | 1175.0 | 9400 | 1.1309 | 0.7070 | 0.7069 |
| 0.1229 | 1200.0 | 9600 | 1.1208 | 0.7070 | 0.7069 |
| 0.121 | 1225.0 | 9800 | 1.1283 | 0.7070 | 0.7069 |
| 0.122 | 1250.0 | 10000 | 1.1299 | 0.7075 | 0.7074 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_mouse_4-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_4-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T02:52:48+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_mouse\_4-seqsight\_8192\_512\_17M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_mouse\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6371
* F1 Score: 0.6765
* Accuracy: 0.6766
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
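Since no snippet is given here, the following is only a hedged sketch: it loads this repository's checkpoint (`OwOOwO/dumbo-krillin9`) as a causal LM, which is what the repo's llama / text-generation tags suggest, with illustrative generation settings.

```python
# Hedged sketch, not an official example: load the checkpoint as a causal LM and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwOOwO/dumbo-krillin9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```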
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OwOOwO/dumbo-krillin9
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T02:57:28+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-to-image
|
transformers
|
# image_generation_model
image_generation_model is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [sentence-transformers/clip-ViT-B-32-multilingual-v1](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1)
* [dalle-mini/dalle-mini](https://huggingface.co/dalle-mini/dalle-mini)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: sentence-transformers/clip-ViT-B-32-multilingual-v1
layer_range: [0, 32]
- model: dalle-mini/dalle-mini
layer_range: [0, 32]
merge_method: slerp
base_model: sentence-transformers/clip-ViT-B-32-multilingual-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "nagayama0706/image_generation_model"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "sentence-transformers/clip-ViT-B-32-multilingual-v1", "dalle-mini/dalle-mini"], "base_model": ["sentence-transformers/clip-ViT-B-32-multilingual-v1", "dalle-mini/dalle-mini"], "pipeline_tag": "text-to-image"}
|
nagayama0706/image_generation_model
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"sentence-transformers/clip-ViT-B-32-multilingual-v1",
"dalle-mini/dalle-mini",
"text-to-image",
"base_model:sentence-transformers/clip-ViT-B-32-multilingual-v1",
"base_model:dalle-mini/dalle-mini",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:00:57+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #sentence-transformers/clip-ViT-B-32-multilingual-v1 #dalle-mini/dalle-mini #text-to-image #base_model-sentence-transformers/clip-ViT-B-32-multilingual-v1 #base_model-dalle-mini/dalle-mini #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# image_generation_model
image_generation_model is a merge of the following models using LazyMergekit:
* sentence-transformers/clip-ViT-B-32-multilingual-v1
* dalle-mini/dalle-mini
## Configuration
## Usage
|
[
"# image_generation_model\n\nimage_generation_model is a merge of the following models using LazyMergekit:\n* sentence-transformers/clip-ViT-B-32-multilingual-v1\n* dalle-mini/dalle-mini",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #sentence-transformers/clip-ViT-B-32-multilingual-v1 #dalle-mini/dalle-mini #text-to-image #base_model-sentence-transformers/clip-ViT-B-32-multilingual-v1 #base_model-dalle-mini/dalle-mini #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# image_generation_model\n\nimage_generation_model is a merge of the following models using LazyMergekit:\n* sentence-transformers/clip-ViT-B-32-multilingual-v1\n* dalle-mini/dalle-mini",
"## Configuration",
"## Usage"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
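As the card leaves this blank, here is only a hedged sketch based on the repo's xlnet / token-classification tags: it runs the checkpoint through the standard token-classification pipeline; the example sentence is arbitrary.

```python
# Hedged sketch, not an official example: run the checkpoint with the token-classification pipeline.
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="AwesomeREK/concept-extraction-xlnet-early-stopping-p2p-self-trained",
    aggregation_strategy="simple",  # merge word pieces into whole predicted spans
)
print(extractor("Transformers use self-attention to model long-range dependencies."))
```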
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
AwesomeREK/concept-extraction-xlnet-early-stopping-p2p-self-trained
| null |
[
"transformers",
"safetensors",
"xlnet",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:04:12+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Mixtral-8x7B-v0.1-japanese
Mixtral-8x7B-v0.1-japanese is a model based on [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) that was continually pretrained with an expanded Japanese vocabulary.
For details, please see [ABEJA's tech blog](https://tech-blog.abeja.asia/entry/abeja-nedo-project-part1-202404).
The Megatron-LM repository used for training is available [here](https://github.com/abeja-inc/Megatron-LM).
# Usage
``` python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "abeja/Mixtral-8x7B-v0.1-japanese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
use_cache=True,
device_map="auto",
)
model.eval()
# Japanese prompt, roughly: system - "As a sincere, courteous, and capable AI assistant, give concise, clear, and helpful answers with confidence."; question - "What is needed for humans and AI to cooperate?"
input_text = """# system
誠実で紳士的で優秀なAIアシスタントとして、簡潔でわかりやすく役に立つ回答を自信をもって答えなさい。
# question
人とAIが協調するためには?
# answer"""
input_ids = tokenizer.encode(input_text, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
input_ids.to(model.device),
max_new_tokens=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
print(output)
```
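As a quick, illustrative check of the Japanese vocabulary expansion (a sketch only; exact vocabulary sizes are not stated in this card), you can compare the extended tokenizer with the base Mixtral tokenizer:

```python
# Illustrative sketch: compare the extended tokenizer with the base Mixtral tokenizer.
from transformers import AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
ja_tok = AutoTokenizer.from_pretrained("abeja/Mixtral-8x7B-v0.1-japanese")

print("base vocab size:", len(base_tok))
print("extended vocab size:", len(ja_tok))

text = "人とAIが協調するためには、"
print("base token count:", len(base_tok.tokenize(text)))      # usually longer
print("extended token count:", len(ja_tok.tokenize(text)))    # usually shorter after vocabulary expansion
```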
# Developers
- Keisuke Fujimoto
- Kentaro Nakanishi
- Kyo Hattori
- Shinya Otani
- Shogo Muranushi
(*) In alphabetical order
|
{"language": ["ja"], "license": "apache-2.0", "widget": [{"text": "\u4eba\u3068AI\u304c\u5354\u8abf\u3059\u308b\u305f\u3081\u306b\u306f\u3001"}]}
|
abeja/Mixtral-8x7B-v0.1-japanese
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:06:14+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #safetensors #mixtral #text-generation #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mixtral-8x7B-v0.1-japanese
Mixtral-8x7B-v0.1-japanese is a model based on Mixtral-8x7B-v0.1 that was continually pretrained with an expanded Japanese vocabulary.
For details, please see ABEJA's tech blog.
The Megatron-LM repository used for training is available here.
# Usage
# Developers
- Keisuke Fujimoto
- Kentaro Nakanishi
- Kyo Hattori
- Shinya Otani
- Shogo Muranushi
(*) In alphabetical order
|
[
"# Mixtral-8x7B-v0.1-japanese\n\nMixtral-8x7B-v0.1-japaneseはMixtral-8x7B-v0.1をベースに日本語の語彙拡張継続事前学習を実施したモデルです。 \n詳細はABEJAのテックブログを参照してください。 \n学習を実施したMetagton-LMのレポジトリはこちらです。",
"# 使い方",
"# 開発者\n- Keisuke Fujimoto\n- Kentaro Nakanishi\n- Kyo Hattori\n- Shinya Otani\n- Shogo Muranushi \n(*)アルファベット順"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mixtral-8x7B-v0.1-japanese\n\nMixtral-8x7B-v0.1-japaneseはMixtral-8x7B-v0.1をベースに日本語の語彙拡張継続事前学習を実施したモデルです。 \n詳細はABEJAのテックブログを参照してください。 \n学習を実施したMetagton-LMのレポジトリはこちらです。",
"# 使い方",
"# 開発者\n- Keisuke Fujimoto\n- Kentaro Nakanishi\n- Kyo Hattori\n- Shinya Otani\n- Shogo Muranushi \n(*)アルファベット順"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
arthurLi920/my_diffusion.dream_booth_unet
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:07:03+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pijarcandra22/NMTBaliIndoT5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0455
- Validation Loss: 2.2245
- Epoch: 499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
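The optimizer settings above can be reconstructed with the TensorFlow utilities shipped in `transformers`. The snippet below is a minimal sketch under that assumption, not the original training script; the base checkpoint (`t5-small`) is the only identifier taken from this card.

```python
# Sketch of the optimizer configuration listed above (TensorFlow/Keras).
# Assumes the standard TF workflow from transformers, not the original script.
from transformers import TFAutoModelForSeq2SeqLM, AdamWeightDecay

model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")

optimizer = AdamWeightDecay(
    learning_rate=1e-4,      # 'learning_rate': 1e-04
    weight_decay_rate=0.01,  # 'weight_decay_rate': 0.01
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)
model.compile(optimizer=optimizer)  # without an explicit loss, the model's internal loss is used
```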
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0057 | 2.3883 | 0 |
| 2.4646 | 2.1171 | 1 |
| 2.2509 | 1.9641 | 2 |
| 2.1002 | 1.8352 | 3 |
| 1.9809 | 1.7476 | 4 |
| 1.8787 | 1.6777 | 5 |
| 1.7996 | 1.6172 | 6 |
| 1.7378 | 1.5669 | 7 |
| 1.6695 | 1.5305 | 8 |
| 1.6190 | 1.4909 | 9 |
| 1.5707 | 1.4619 | 10 |
| 1.5296 | 1.4280 | 11 |
| 1.4855 | 1.4013 | 12 |
| 1.4541 | 1.3778 | 13 |
| 1.4139 | 1.3560 | 14 |
| 1.3809 | 1.3410 | 15 |
| 1.3536 | 1.3156 | 16 |
| 1.3255 | 1.3029 | 17 |
| 1.2994 | 1.2946 | 18 |
| 1.2748 | 1.2796 | 19 |
| 1.2497 | 1.2659 | 20 |
| 1.2214 | 1.2633 | 21 |
| 1.2042 | 1.2480 | 22 |
| 1.1865 | 1.2341 | 23 |
| 1.1632 | 1.2291 | 24 |
| 1.1486 | 1.2238 | 25 |
| 1.1279 | 1.2102 | 26 |
| 1.1108 | 1.2092 | 27 |
| 1.0973 | 1.2033 | 28 |
| 1.0793 | 1.1981 | 29 |
| 1.0650 | 1.1952 | 30 |
| 1.0491 | 1.1866 | 31 |
| 1.0324 | 1.1817 | 32 |
| 1.0192 | 1.1826 | 33 |
| 0.9999 | 1.1824 | 34 |
| 0.9935 | 1.1791 | 35 |
| 0.9786 | 1.1704 | 36 |
| 0.9648 | 1.1692 | 37 |
| 0.9496 | 1.1653 | 38 |
| 0.9397 | 1.1667 | 39 |
| 0.9295 | 1.1598 | 40 |
| 0.9186 | 1.1623 | 41 |
| 0.9061 | 1.1609 | 42 |
| 0.8900 | 1.1576 | 43 |
| 0.8813 | 1.1623 | 44 |
| 0.8659 | 1.1559 | 45 |
| 0.8592 | 1.1610 | 46 |
| 0.8505 | 1.1600 | 47 |
| 0.8385 | 1.1565 | 48 |
| 0.8273 | 1.1641 | 49 |
| 0.8207 | 1.1624 | 50 |
| 0.8047 | 1.1596 | 51 |
| 0.8019 | 1.1547 | 52 |
| 0.7903 | 1.1609 | 53 |
| 0.7812 | 1.1614 | 54 |
| 0.7721 | 1.1524 | 55 |
| 0.7625 | 1.1628 | 56 |
| 0.7532 | 1.1659 | 57 |
| 0.7466 | 1.1653 | 58 |
| 0.7368 | 1.1666 | 59 |
| 0.7248 | 1.1738 | 60 |
| 0.7210 | 1.1712 | 61 |
| 0.7103 | 1.1770 | 62 |
| 0.7018 | 1.1743 | 63 |
| 0.6949 | 1.1783 | 64 |
| 0.6848 | 1.1828 | 65 |
| 0.6786 | 1.1822 | 66 |
| 0.6702 | 1.1876 | 67 |
| 0.6599 | 1.1957 | 68 |
| 0.6561 | 1.1961 | 69 |
| 0.6502 | 1.1933 | 70 |
| 0.6381 | 1.1980 | 71 |
| 0.6323 | 1.2030 | 72 |
| 0.6254 | 1.2119 | 73 |
| 0.6169 | 1.2142 | 74 |
| 0.6094 | 1.2083 | 75 |
| 0.6060 | 1.2068 | 76 |
| 0.6002 | 1.2247 | 77 |
| 0.5907 | 1.2285 | 78 |
| 0.5811 | 1.2294 | 79 |
| 0.5777 | 1.2293 | 80 |
| 0.5729 | 1.2290 | 81 |
| 0.5625 | 1.2358 | 82 |
| 0.5575 | 1.2479 | 83 |
| 0.5527 | 1.2427 | 84 |
| 0.5454 | 1.2489 | 85 |
| 0.5372 | 1.2542 | 86 |
| 0.5337 | 1.2600 | 87 |
| 0.5241 | 1.2670 | 88 |
| 0.5221 | 1.2696 | 89 |
| 0.5177 | 1.2719 | 90 |
| 0.5106 | 1.2769 | 91 |
| 0.5041 | 1.2771 | 92 |
| 0.4958 | 1.2870 | 93 |
| 0.4896 | 1.2907 | 94 |
| 0.4849 | 1.2894 | 95 |
| 0.4788 | 1.3095 | 96 |
| 0.4745 | 1.3199 | 97 |
| 0.4703 | 1.3117 | 98 |
| 0.4630 | 1.3169 | 99 |
| 0.4574 | 1.3172 | 100 |
| 0.4548 | 1.3263 | 101 |
| 0.4503 | 1.3333 | 102 |
| 0.4455 | 1.3304 | 103 |
| 0.4390 | 1.3364 | 104 |
| 0.4331 | 1.3508 | 105 |
| 0.4277 | 1.3411 | 106 |
| 0.4225 | 1.3521 | 107 |
| 0.4174 | 1.3610 | 108 |
| 0.4140 | 1.3560 | 109 |
| 0.4084 | 1.3737 | 110 |
| 0.4029 | 1.3741 | 111 |
| 0.4000 | 1.3822 | 112 |
| 0.3956 | 1.3859 | 113 |
| 0.3876 | 1.4035 | 114 |
| 0.3873 | 1.4108 | 115 |
| 0.3766 | 1.3996 | 116 |
| 0.3773 | 1.4035 | 117 |
| 0.3734 | 1.4129 | 118 |
| 0.3669 | 1.4219 | 119 |
| 0.3622 | 1.4210 | 120 |
| 0.3612 | 1.4192 | 121 |
| 0.3563 | 1.4289 | 122 |
| 0.3532 | 1.4450 | 123 |
| 0.3463 | 1.4463 | 124 |
| 0.3426 | 1.4515 | 125 |
| 0.3392 | 1.4652 | 126 |
| 0.3334 | 1.4602 | 127 |
| 0.3320 | 1.4642 | 128 |
| 0.3268 | 1.4667 | 129 |
| 0.3240 | 1.4796 | 130 |
| 0.3202 | 1.4793 | 131 |
| 0.3160 | 1.4897 | 132 |
| 0.3147 | 1.4883 | 133 |
| 0.3093 | 1.4900 | 134 |
| 0.3056 | 1.5097 | 135 |
| 0.3048 | 1.5073 | 136 |
| 0.3020 | 1.5091 | 137 |
| 0.2974 | 1.5087 | 138 |
| 0.2910 | 1.5308 | 139 |
| 0.2888 | 1.5318 | 140 |
| 0.2854 | 1.5434 | 141 |
| 0.2827 | 1.5454 | 142 |
| 0.2812 | 1.5463 | 143 |
| 0.2767 | 1.5516 | 144 |
| 0.2734 | 1.5527 | 145 |
| 0.2693 | 1.5590 | 146 |
| 0.2669 | 1.5727 | 147 |
| 0.2636 | 1.5765 | 148 |
| 0.2638 | 1.5748 | 149 |
| 0.2605 | 1.5942 | 150 |
| 0.2569 | 1.5878 | 151 |
| 0.2525 | 1.6007 | 152 |
| 0.2495 | 1.5954 | 153 |
| 0.2476 | 1.6063 | 154 |
| 0.2466 | 1.6182 | 155 |
| 0.2399 | 1.6249 | 156 |
| 0.2377 | 1.6177 | 157 |
| 0.2377 | 1.6197 | 158 |
| 0.2351 | 1.6209 | 159 |
| 0.2302 | 1.6320 | 160 |
| 0.2294 | 1.6396 | 161 |
| 0.2247 | 1.6485 | 162 |
| 0.2249 | 1.6542 | 163 |
| 0.2213 | 1.6508 | 164 |
| 0.2182 | 1.6581 | 165 |
| 0.2177 | 1.6640 | 166 |
| 0.2146 | 1.6758 | 167 |
| 0.2123 | 1.6765 | 168 |
| 0.2117 | 1.6838 | 169 |
| 0.2083 | 1.6785 | 170 |
| 0.2069 | 1.6967 | 171 |
| 0.2023 | 1.6948 | 172 |
| 0.1998 | 1.7009 | 173 |
| 0.1990 | 1.7082 | 174 |
| 0.1969 | 1.7074 | 175 |
| 0.1947 | 1.7101 | 176 |
| 0.1932 | 1.7155 | 177 |
| 0.1913 | 1.7187 | 178 |
| 0.1901 | 1.7305 | 179 |
| 0.1872 | 1.7407 | 180 |
| 0.1874 | 1.7371 | 181 |
| 0.1886 | 1.7379 | 182 |
| 0.1831 | 1.7476 | 183 |
| 0.1827 | 1.7467 | 184 |
| 0.1779 | 1.7536 | 185 |
| 0.1767 | 1.7554 | 186 |
| 0.1752 | 1.7647 | 187 |
| 0.1726 | 1.7648 | 188 |
| 0.1711 | 1.7744 | 189 |
| 0.1707 | 1.7667 | 190 |
| 0.1657 | 1.7909 | 191 |
| 0.1662 | 1.7837 | 192 |
| 0.1643 | 1.7871 | 193 |
| 0.1640 | 1.7876 | 194 |
| 0.1614 | 1.8020 | 195 |
| 0.1615 | 1.7982 | 196 |
| 0.1572 | 1.8096 | 197 |
| 0.1575 | 1.8112 | 198 |
| 0.1556 | 1.8249 | 199 |
| 0.1530 | 1.8180 | 200 |
| 0.1519 | 1.8243 | 201 |
| 0.1532 | 1.8174 | 202 |
| 0.1512 | 1.8278 | 203 |
| 0.1488 | 1.8331 | 204 |
| 0.1465 | 1.8437 | 205 |
| 0.1458 | 1.8439 | 206 |
| 0.1470 | 1.8363 | 207 |
| 0.1444 | 1.8396 | 208 |
| 0.1419 | 1.8571 | 209 |
| 0.1403 | 1.8577 | 210 |
| 0.1417 | 1.8495 | 211 |
| 0.1414 | 1.8475 | 212 |
| 0.1399 | 1.8680 | 213 |
| 0.1367 | 1.8644 | 214 |
| 0.1363 | 1.8738 | 215 |
| 0.1350 | 1.8667 | 216 |
| 0.1314 | 1.8698 | 217 |
| 0.1329 | 1.8806 | 218 |
| 0.1315 | 1.8782 | 219 |
| 0.1318 | 1.8778 | 220 |
| 0.1283 | 1.8790 | 221 |
| 0.1277 | 1.8937 | 222 |
| 0.1254 | 1.8924 | 223 |
| 0.1249 | 1.8962 | 224 |
| 0.1266 | 1.8913 | 225 |
| 0.1232 | 1.9012 | 226 |
| 0.1229 | 1.8963 | 227 |
| 0.1222 | 1.8979 | 228 |
| 0.1201 | 1.9140 | 229 |
| 0.1206 | 1.9087 | 230 |
| 0.1203 | 1.8971 | 231 |
| 0.1178 | 1.9294 | 232 |
| 0.1177 | 1.9287 | 233 |
| 0.1178 | 1.9271 | 234 |
| 0.1173 | 1.9292 | 235 |
| 0.1167 | 1.9276 | 236 |
| 0.1165 | 1.9266 | 237 |
| 0.1131 | 1.9263 | 238 |
| 0.1129 | 1.9241 | 239 |
| 0.1108 | 1.9346 | 240 |
| 0.1112 | 1.9506 | 241 |
| 0.1099 | 1.9488 | 242 |
| 0.1093 | 1.9362 | 243 |
| 0.1099 | 1.9409 | 244 |
| 0.1098 | 1.9370 | 245 |
| 0.1070 | 1.9454 | 246 |
| 0.1072 | 1.9498 | 247 |
| 0.1060 | 1.9508 | 248 |
| 0.1055 | 1.9529 | 249 |
| 0.1055 | 1.9637 | 250 |
| 0.1025 | 1.9580 | 251 |
| 0.1043 | 1.9663 | 252 |
| 0.1027 | 1.9708 | 253 |
| 0.1023 | 1.9658 | 254 |
| 0.1014 | 1.9815 | 255 |
| 0.1011 | 1.9739 | 256 |
| 0.0996 | 1.9742 | 257 |
| 0.0996 | 1.9828 | 258 |
| 0.0990 | 1.9763 | 259 |
| 0.0982 | 1.9805 | 260 |
| 0.0977 | 1.9908 | 261 |
| 0.0966 | 1.9738 | 262 |
| 0.0972 | 1.9763 | 263 |
| 0.0958 | 1.9766 | 264 |
| 0.0961 | 1.9863 | 265 |
| 0.0957 | 1.9877 | 266 |
| 0.0943 | 1.9820 | 267 |
| 0.0938 | 1.9967 | 268 |
| 0.0933 | 2.0096 | 269 |
| 0.0950 | 1.9914 | 270 |
| 0.0909 | 1.9910 | 271 |
| 0.0924 | 2.0045 | 272 |
| 0.0913 | 2.0063 | 273 |
| 0.0903 | 2.0011 | 274 |
| 0.0910 | 1.9991 | 275 |
| 0.0897 | 2.0035 | 276 |
| 0.0894 | 2.0074 | 277 |
| 0.0863 | 2.0188 | 278 |
| 0.0895 | 2.0141 | 279 |
| 0.0871 | 2.0231 | 280 |
| 0.0871 | 2.0101 | 281 |
| 0.0861 | 2.0031 | 282 |
| 0.0858 | 2.0285 | 283 |
| 0.0869 | 2.0226 | 284 |
| 0.0849 | 2.0267 | 285 |
| 0.0852 | 2.0179 | 286 |
| 0.0844 | 2.0336 | 287 |
| 0.0856 | 2.0277 | 288 |
| 0.0843 | 2.0256 | 289 |
| 0.0850 | 2.0255 | 290 |
| 0.0833 | 2.0227 | 291 |
| 0.0824 | 2.0334 | 292 |
| 0.0816 | 2.0261 | 293 |
| 0.0827 | 2.0364 | 294 |
| 0.0829 | 2.0292 | 295 |
| 0.0820 | 2.0219 | 296 |
| 0.0807 | 2.0318 | 297 |
| 0.0806 | 2.0230 | 298 |
| 0.0800 | 2.0360 | 299 |
| 0.0784 | 2.0483 | 300 |
| 0.0782 | 2.0374 | 301 |
| 0.0792 | 2.0430 | 302 |
| 0.0794 | 2.0399 | 303 |
| 0.0789 | 2.0536 | 304 |
| 0.0764 | 2.0584 | 305 |
| 0.0776 | 2.0456 | 306 |
| 0.0760 | 2.0432 | 307 |
| 0.0762 | 2.0609 | 308 |
| 0.0777 | 2.0608 | 309 |
| 0.0762 | 2.0609 | 310 |
| 0.0752 | 2.0525 | 311 |
| 0.0758 | 2.0568 | 312 |
| 0.0771 | 2.0524 | 313 |
| 0.0748 | 2.0522 | 314 |
| 0.0755 | 2.0505 | 315 |
| 0.0742 | 2.0459 | 316 |
| 0.0748 | 2.0528 | 317 |
| 0.0735 | 2.0612 | 318 |
| 0.0727 | 2.0561 | 319 |
| 0.0725 | 2.0676 | 320 |
| 0.0730 | 2.0725 | 321 |
| 0.0724 | 2.0638 | 322 |
| 0.0728 | 2.0584 | 323 |
| 0.0712 | 2.0773 | 324 |
| 0.0720 | 2.0709 | 325 |
| 0.0712 | 2.0729 | 326 |
| 0.0698 | 2.0753 | 327 |
| 0.0699 | 2.0705 | 328 |
| 0.0705 | 2.0701 | 329 |
| 0.0706 | 2.0762 | 330 |
| 0.0699 | 2.0718 | 331 |
| 0.0690 | 2.0798 | 332 |
| 0.0682 | 2.0872 | 333 |
| 0.0689 | 2.0809 | 334 |
| 0.0683 | 2.0749 | 335 |
| 0.0688 | 2.0851 | 336 |
| 0.0682 | 2.0854 | 337 |
| 0.0676 | 2.0818 | 338 |
| 0.0679 | 2.0810 | 339 |
| 0.0671 | 2.0885 | 340 |
| 0.0666 | 2.0887 | 341 |
| 0.0669 | 2.0854 | 342 |
| 0.0673 | 2.0927 | 343 |
| 0.0666 | 2.0821 | 344 |
| 0.0657 | 2.0998 | 345 |
| 0.0663 | 2.1133 | 346 |
| 0.0665 | 2.0853 | 347 |
| 0.0655 | 2.1038 | 348 |
| 0.0652 | 2.1013 | 349 |
| 0.0651 | 2.0905 | 350 |
| 0.0658 | 2.1061 | 351 |
| 0.0649 | 2.0931 | 352 |
| 0.0658 | 2.1027 | 353 |
| 0.0654 | 2.1045 | 354 |
| 0.0649 | 2.0973 | 355 |
| 0.0651 | 2.1105 | 356 |
| 0.0633 | 2.1159 | 357 |
| 0.0634 | 2.1088 | 358 |
| 0.0625 | 2.1325 | 359 |
| 0.0629 | 2.1245 | 360 |
| 0.0621 | 2.1334 | 361 |
| 0.0629 | 2.1150 | 362 |
| 0.0643 | 2.0974 | 363 |
| 0.0624 | 2.1102 | 364 |
| 0.0628 | 2.1239 | 365 |
| 0.0624 | 2.1142 | 366 |
| 0.0612 | 2.1373 | 367 |
| 0.0622 | 2.1213 | 368 |
| 0.0623 | 2.1062 | 369 |
| 0.0611 | 2.1195 | 370 |
| 0.0609 | 2.1172 | 371 |
| 0.0605 | 2.1256 | 372 |
| 0.0617 | 2.1373 | 373 |
| 0.0605 | 2.1289 | 374 |
| 0.0601 | 2.1241 | 375 |
| 0.0598 | 2.1250 | 376 |
| 0.0599 | 2.1308 | 377 |
| 0.0610 | 2.1231 | 378 |
| 0.0608 | 2.1316 | 379 |
| 0.0596 | 2.1307 | 380 |
| 0.0597 | 2.1267 | 381 |
| 0.0587 | 2.1341 | 382 |
| 0.0587 | 2.1314 | 383 |
| 0.0593 | 2.1290 | 384 |
| 0.0592 | 2.1239 | 385 |
| 0.0570 | 2.1267 | 386 |
| 0.0595 | 2.1282 | 387 |
| 0.0586 | 2.1326 | 388 |
| 0.0590 | 2.1332 | 389 |
| 0.0583 | 2.1316 | 390 |
| 0.0576 | 2.1392 | 391 |
| 0.0594 | 2.1280 | 392 |
| 0.0575 | 2.1357 | 393 |
| 0.0567 | 2.1392 | 394 |
| 0.0566 | 2.1370 | 395 |
| 0.0571 | 2.1186 | 396 |
| 0.0561 | 2.1400 | 397 |
| 0.0567 | 2.1312 | 398 |
| 0.0571 | 2.1440 | 399 |
| 0.0568 | 2.1485 | 400 |
| 0.0561 | 2.1539 | 401 |
| 0.0563 | 2.1461 | 402 |
| 0.0565 | 2.1496 | 403 |
| 0.0554 | 2.1622 | 404 |
| 0.0561 | 2.1580 | 405 |
| 0.0553 | 2.1723 | 406 |
| 0.0560 | 2.1498 | 407 |
| 0.0555 | 2.1546 | 408 |
| 0.0552 | 2.1622 | 409 |
| 0.0549 | 2.1548 | 410 |
| 0.0548 | 2.1613 | 411 |
| 0.0546 | 2.1655 | 412 |
| 0.0540 | 2.1661 | 413 |
| 0.0549 | 2.1710 | 414 |
| 0.0543 | 2.1760 | 415 |
| 0.0543 | 2.1648 | 416 |
| 0.0538 | 2.1800 | 417 |
| 0.0524 | 2.1824 | 418 |
| 0.0528 | 2.1849 | 419 |
| 0.0531 | 2.1668 | 420 |
| 0.0548 | 2.1598 | 421 |
| 0.0543 | 2.1624 | 422 |
| 0.0533 | 2.1705 | 423 |
| 0.0539 | 2.1821 | 424 |
| 0.0531 | 2.1629 | 425 |
| 0.0537 | 2.1704 | 426 |
| 0.0529 | 2.1687 | 427 |
| 0.0525 | 2.1990 | 428 |
| 0.0518 | 2.1939 | 429 |
| 0.0522 | 2.1761 | 430 |
| 0.0521 | 2.1725 | 431 |
| 0.0521 | 2.1677 | 432 |
| 0.0517 | 2.1731 | 433 |
| 0.0512 | 2.1833 | 434 |
| 0.0514 | 2.1914 | 435 |
| 0.0522 | 2.1858 | 436 |
| 0.0513 | 2.1854 | 437 |
| 0.0517 | 2.1875 | 438 |
| 0.0513 | 2.2028 | 439 |
| 0.0518 | 2.2001 | 440 |
| 0.0510 | 2.1821 | 441 |
| 0.0508 | 2.1831 | 442 |
| 0.0507 | 2.1787 | 443 |
| 0.0512 | 2.1773 | 444 |
| 0.0505 | 2.1962 | 445 |
| 0.0507 | 2.1756 | 446 |
| 0.0507 | 2.1885 | 447 |
| 0.0500 | 2.1993 | 448 |
| 0.0505 | 2.1738 | 449 |
| 0.0511 | 2.1672 | 450 |
| 0.0486 | 2.1973 | 451 |
| 0.0500 | 2.1826 | 452 |
| 0.0513 | 2.1787 | 453 |
| 0.0502 | 2.1902 | 454 |
| 0.0501 | 2.1805 | 455 |
| 0.0494 | 2.1814 | 456 |
| 0.0499 | 2.1808 | 457 |
| 0.0496 | 2.1744 | 458 |
| 0.0498 | 2.1721 | 459 |
| 0.0493 | 2.1922 | 460 |
| 0.0499 | 2.1888 | 461 |
| 0.0497 | 2.1897 | 462 |
| 0.0497 | 2.1876 | 463 |
| 0.0489 | 2.1910 | 464 |
| 0.0481 | 2.1933 | 465 |
| 0.0497 | 2.1821 | 466 |
| 0.0494 | 2.1943 | 467 |
| 0.0489 | 2.1991 | 468 |
| 0.0482 | 2.1978 | 469 |
| 0.0485 | 2.1813 | 470 |
| 0.0483 | 2.1804 | 471 |
| 0.0480 | 2.1988 | 472 |
| 0.0483 | 2.1996 | 473 |
| 0.0477 | 2.1996 | 474 |
| 0.0475 | 2.1978 | 475 |
| 0.0483 | 2.1811 | 476 |
| 0.0470 | 2.1921 | 477 |
| 0.0478 | 2.1978 | 478 |
| 0.0471 | 2.1900 | 479 |
| 0.0484 | 2.2167 | 480 |
| 0.0474 | 2.1919 | 481 |
| 0.0475 | 2.2082 | 482 |
| 0.0466 | 2.2219 | 483 |
| 0.0476 | 2.1836 | 484 |
| 0.0465 | 2.2060 | 485 |
| 0.0473 | 2.2154 | 486 |
| 0.0475 | 2.2080 | 487 |
| 0.0464 | 2.2102 | 488 |
| 0.0465 | 2.2156 | 489 |
| 0.0475 | 2.2129 | 490 |
| 0.0463 | 2.2031 | 491 |
| 0.0459 | 2.2007 | 492 |
| 0.0466 | 2.2033 | 493 |
| 0.0462 | 2.2144 | 494 |
| 0.0461 | 2.2208 | 495 |
| 0.0462 | 2.2257 | 496 |
| 0.0463 | 2.2060 | 497 |
| 0.0458 | 2.2229 | 498 |
| 0.0455 | 2.2245 | 499 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "t5-small", "model-index": [{"name": "pijarcandra22/NMTBaliIndoT5", "results": []}]}
|
pijarcandra22/NMTBaliIndoT5
| null |
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:08:48+00:00
|
[] |
[] |
TAGS
#transformers #tf #t5 #text2text-generation #generated_from_keras_callback #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
pijarcandra22/NMTBaliIndoT5
===========================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0455
* Validation Loss: 2.2245
* Epoch: 499
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 1e-04, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.38.2
* TensorFlow 2.15.0
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 1e-04, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tf #t5 #text2text-generation #generated_from_keras_callback #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 1e-04, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
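Until the authors add their own snippet, the sketch below shows the standard `transformers` text-generation pattern; the repository id is taken from this record and the prompt is a made-up example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JinbiaoZhu/gemma-2b-robotplanning-v2"  # this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Plan the steps for a robot to pick up the red cube."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```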
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
JinbiaoZhu/gemma-2b-robotplanning-v2
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:09:02+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_3-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_mouse_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1631
- F1 Score: 0.8742
- Accuracy: 0.8745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
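For orientation, the optimizer and schedule above map onto standard PyTorch/`transformers` calls as sketched below; this mirrors the listed values only and is not the original training script (the linear layer is a placeholder for the PEFT-wrapped classifier).

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(16, 2)  # placeholder for the PEFT-wrapped classifier
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,         # no warmup steps are listed in the card
    num_training_steps=10_000,  # training_steps: 10000
)
```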
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4488 | 100.0 | 200 | 0.4663 | 0.8199 | 0.8201 |
| 0.208 | 200.0 | 400 | 0.6218 | 0.8159 | 0.8159 |
| 0.1177 | 300.0 | 600 | 0.8011 | 0.7782 | 0.7782 |
| 0.0761 | 400.0 | 800 | 0.8413 | 0.8115 | 0.8117 |
| 0.053 | 500.0 | 1000 | 0.9733 | 0.7991 | 0.7992 |
| 0.0398 | 600.0 | 1200 | 1.0036 | 0.8157 | 0.8159 |
| 0.0335 | 700.0 | 1400 | 1.0470 | 0.8075 | 0.8075 |
| 0.0282 | 800.0 | 1600 | 1.1209 | 0.7908 | 0.7908 |
| 0.0231 | 900.0 | 1800 | 1.1637 | 0.8159 | 0.8159 |
| 0.0196 | 1000.0 | 2000 | 1.2133 | 0.8033 | 0.8033 |
| 0.0161 | 1100.0 | 2200 | 1.2364 | 0.8159 | 0.8159 |
| 0.0148 | 1200.0 | 2400 | 1.2124 | 0.8032 | 0.8033 |
| 0.0154 | 1300.0 | 2600 | 1.2008 | 0.8033 | 0.8033 |
| 0.0122 | 1400.0 | 2800 | 1.2411 | 0.7992 | 0.7992 |
| 0.0103 | 1500.0 | 3000 | 1.3349 | 0.8116 | 0.8117 |
| 0.0119 | 1600.0 | 3200 | 1.2696 | 0.8033 | 0.8033 |
| 0.0093 | 1700.0 | 3400 | 1.3035 | 0.8117 | 0.8117 |
| 0.0073 | 1800.0 | 3600 | 1.4007 | 0.8075 | 0.8075 |
| 0.0083 | 1900.0 | 3800 | 1.3624 | 0.8033 | 0.8033 |
| 0.0077 | 2000.0 | 4000 | 1.3760 | 0.7989 | 0.7992 |
| 0.007 | 2100.0 | 4200 | 1.4112 | 0.8075 | 0.8075 |
| 0.007 | 2200.0 | 4400 | 1.3917 | 0.8075 | 0.8075 |
| 0.006 | 2300.0 | 4600 | 1.3986 | 0.8159 | 0.8159 |
| 0.0049 | 2400.0 | 4800 | 1.4965 | 0.7991 | 0.7992 |
| 0.0056 | 2500.0 | 5000 | 1.3747 | 0.8033 | 0.8033 |
| 0.0047 | 2600.0 | 5200 | 1.4688 | 0.8117 | 0.8117 |
| 0.0048 | 2700.0 | 5400 | 1.3709 | 0.8117 | 0.8117 |
| 0.0047 | 2800.0 | 5600 | 1.3879 | 0.8284 | 0.8285 |
| 0.0048 | 2900.0 | 5800 | 1.4648 | 0.8075 | 0.8075 |
| 0.0036 | 3000.0 | 6000 | 1.4342 | 0.8159 | 0.8159 |
| 0.0039 | 3100.0 | 6200 | 1.4502 | 0.8201 | 0.8201 |
| 0.0036 | 3200.0 | 6400 | 1.4629 | 0.8159 | 0.8159 |
| 0.0033 | 3300.0 | 6600 | 1.4644 | 0.8201 | 0.8201 |
| 0.0037 | 3400.0 | 6800 | 1.4447 | 0.8159 | 0.8159 |
| 0.0031 | 3500.0 | 7000 | 1.4561 | 0.8201 | 0.8201 |
| 0.0032 | 3600.0 | 7200 | 1.4291 | 0.8158 | 0.8159 |
| 0.0027 | 3700.0 | 7400 | 1.4629 | 0.8201 | 0.8201 |
| 0.003 | 3800.0 | 7600 | 1.4856 | 0.8159 | 0.8159 |
| 0.0035 | 3900.0 | 7800 | 1.4169 | 0.8159 | 0.8159 |
| 0.0027 | 4000.0 | 8000 | 1.4571 | 0.8201 | 0.8201 |
| 0.0026 | 4100.0 | 8200 | 1.5154 | 0.8075 | 0.8075 |
| 0.0025 | 4200.0 | 8400 | 1.5243 | 0.8159 | 0.8159 |
| 0.0026 | 4300.0 | 8600 | 1.4927 | 0.8159 | 0.8159 |
| 0.0022 | 4400.0 | 8800 | 1.4992 | 0.8117 | 0.8117 |
| 0.002 | 4500.0 | 9000 | 1.5349 | 0.8117 | 0.8117 |
| 0.0023 | 4600.0 | 9200 | 1.5306 | 0.8117 | 0.8117 |
| 0.0024 | 4700.0 | 9400 | 1.5543 | 0.8117 | 0.8117 |
| 0.0021 | 4800.0 | 9600 | 1.5321 | 0.8075 | 0.8075 |
| 0.0021 | 4900.0 | 9800 | 1.5424 | 0.8117 | 0.8117 |
| 0.0021 | 5000.0 | 10000 | 1.5430 | 0.8117 | 0.8117 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_mouse_3-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_3-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T03:09:21+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_mouse\_3-seqsight\_8192\_512\_17M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_mouse\_3 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1631
* F1 Score: 0.8742
* Accuracy: 0.8745
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dialogsum_6593_bart-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3997
- Rouge1: 0.4226
- Rouge2: 0.2139
- Rougel: 0.3698
- Rougelsum: 0.37
- Gen Len: 19.86
## Model description
More information needed
## Intended uses & limitations
More information needed
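Given the repository id for this card (`baek26/bart-dialogsum`), dialogue summarization is the most plausible use. A minimal inference sketch under that assumption (the dialogue is a made-up example):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="baek26/bart-dialogsum")  # this record's repo id

dialogue = (
    "#Person1#: Hi, I'd like to book a table for two tonight at seven.\n"
    "#Person2#: Sure, may I have your name and phone number?"
)  # made-up example in the DialogSum style
print(summarizer(dialogue, max_length=32, min_length=5)[0]["summary_text"])
```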
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.3756 | 2.57 | 500 | 0.4106 | 0.4083 | 0.19 | 0.3536 | 0.3534 | 19.944 |
| 0.2843 | 5.14 | 1000 | 0.4048 | 0.422 | 0.2134 | 0.3678 | 0.3681 | 19.848 |
| 0.2561 | 7.7 | 1500 | 0.3997 | 0.4226 | 0.2139 | 0.3698 | 0.37 | 19.86 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-base", "model-index": [{"name": "dialogsum_6593_bart-base", "results": []}]}
|
baek26/bart-dialogsum
| null |
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:12:22+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
dialogsum\_6593\_bart-base
==========================
This model is a fine-tuned version of facebook/bart-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3997
* Rouge1: 0.4226
* Rouge2: 0.2139
* Rougel: 0.3698
* Rougelsum: 0.37
* Gen Len: 19.86
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.0.0+cu117
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.0.0+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.0.0+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=8
```
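To apply the extracted adapter at inference time, a typical PEFT pattern is sketched below. The Hub ids are assumptions that mirror the local paths in the command above.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumed Hub ids mirroring the local paths used in the extraction command above.
base_id = "thomasgauthier/Mistral-7B-v0.1-zeroed_out-ChatML-base"
adapter_id = "thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r8"  # this repository

base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the rank-8 adapter
```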
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r8
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:12:23+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_hh_shp2_dpo5
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6851
- Rewards/chosen: -2.9461
- Rewards/rejected: -3.5981
- Rewards/accuracies: 0.5100
- Rewards/margins: 0.6520
- Logps/rejected: -231.1039
- Logps/chosen: -240.5692
- Logits/rejected: -1.0755
- Logits/chosen: -1.0796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
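For orientation, these values map roughly onto a `transformers` `TrainingArguments` object as sketched below; in a typical TRL DPO run such arguments are handed to the trainer, but this sketch only mirrors the listed hyperparameters and is not the original script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model_hh_shp2_dpo5",   # placeholder output directory
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,     # 4 x 4 = total train batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)
```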
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0775 | 2.67 | 100 | 1.5649 | -1.7873 | -2.0584 | 0.5400 | 0.2711 | -228.0247 | -238.2517 | -0.8122 | -0.8281 |
| 0.0049 | 5.33 | 200 | 1.6753 | -0.0539 | -1.0216 | 0.5400 | 0.9678 | -225.9511 | -234.7848 | -1.0685 | -1.0774 |
| 0.0012 | 8.0 | 300 | 2.5069 | -2.8261 | -2.9419 | 0.4900 | 0.1158 | -229.7917 | -240.3293 | -1.1074 | -1.1154 |
| 0.0 | 10.67 | 400 | 2.8043 | -3.0513 | -3.6756 | 0.5 | 0.6243 | -231.2590 | -240.7796 | -1.0512 | -1.0557 |
| 0.0 | 13.33 | 500 | 2.7025 | -2.9535 | -3.5803 | 0.5100 | 0.6268 | -231.0683 | -240.5840 | -1.0760 | -1.0802 |
| 0.0 | 16.0 | 600 | 2.6581 | -2.9364 | -3.5947 | 0.5100 | 0.6583 | -231.0972 | -240.5500 | -1.0747 | -1.0789 |
| 0.0 | 18.67 | 700 | 2.6744 | -2.9415 | -3.6188 | 0.5200 | 0.6773 | -231.1454 | -240.5601 | -1.0743 | -1.0785 |
| 0.0 | 21.33 | 800 | 2.6936 | -2.9559 | -3.6011 | 0.5100 | 0.6452 | -231.1101 | -240.5889 | -1.0752 | -1.0793 |
| 0.0 | 24.0 | 900 | 2.6867 | -2.9162 | -3.5792 | 0.5200 | 0.6629 | -231.0661 | -240.5095 | -1.0752 | -1.0798 |
| 0.0 | 26.67 | 1000 | 2.6851 | -2.9461 | -3.5981 | 0.5100 | 0.6520 | -231.1039 | -240.5692 | -1.0755 | -1.0796 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp2_dpo5", "results": []}]}
|
guoyu-zhang/model_hh_shp2_dpo5
| null |
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null |
2024-04-16T03:12:32+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
model\_hh\_shp2\_dpo5
=====================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.6851
* Rewards/chosen: -2.9461
* Rewards/rejected: -3.5981
* Rewards/accuracies: 0.5100
* Rewards/margins: 0.6520
* Logps/rejected: -231.1039
* Logps/chosen: -240.5692
* Logits/rejected: -1.0755
* Logits/chosen: -1.0796
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_steps: 100
* training\_steps: 1000
### Training results
### Framework versions
* PEFT 0.10.0
* Transformers 4.39.1
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_7999_bart-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4857
- Rouge1: 0.1511
- Rouge2: 0.0596
- Rougel: 0.1229
- Rougelsum: 0.1301
- Gen Len: 20.0
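The ROUGE figures above can be recomputed offline with the `evaluate` library; a minimal sketch with placeholder predictions and references:

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["The bill amends the tax code."],               # placeholder model output
    references=["This bill amends the Internal Revenue Code."],  # placeholder reference summary
)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```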
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
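These values correspond to a standard `Seq2SeqTrainingArguments` setup; the sketch below is an approximate reconstruction under that assumption, not the original training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="billsum_7999_bart-base",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,       # 4 x 16 = total train batch size 64
    warmup_steps=500,
    num_train_epochs=10,
    seed=42,
    predict_with_generate=True,           # assumed, since ROUGE / Gen Len are reported
)
```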
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.6272 | 1.69 | 500 | 2.4571 | 0.1622 | 0.071 | 0.1353 | 0.1414 | 20.0 |
| 1.3504 | 3.38 | 1000 | 2.4533 | 0.1562 | 0.0637 | 0.1272 | 0.1345 | 20.0 |
| 1.251 | 5.07 | 1500 | 2.4592 | 0.1489 | 0.0586 | 0.1215 | 0.1287 | 20.0 |
| 1.2374 | 6.75 | 2000 | 2.4967 | 0.1487 | 0.0588 | 0.1219 | 0.1286 | 20.0 |
| 1.15 | 8.44 | 2500 | 2.4857 | 0.1511 | 0.0596 | 0.1229 | 0.1301 | 20.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-base", "model-index": [{"name": "billsum_7999_bart-base", "results": []}]}
|
baek26/bart-billsum
| null |
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:12:33+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
billsum\_7999\_bart-base
========================
This model is a fine-tuned version of facebook/bart-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4857
* Rouge1: 0.1511
* Rouge2: 0.0596
* Rougel: 0.1229
* Rougelsum: 0.1301
* Gen Len: 20.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.0.0+cu117
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.0.0+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.0.0+cu117\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=16
```
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r16
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:12:47+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=32
```
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r32
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:12:54+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=64 --device=cuda:0
```
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r64
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:13:03+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=128 --device=cuda:0
```
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r128
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:14:39+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
redmojo7/gemma-2b-it-finetune-palo-alto-network-auto
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:14:43+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
swj0419/email_STEP0000003
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:14:53+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=256 --device=cuda:0
```
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r256
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:16:11+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
azib/phi-2
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:18:28+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ OUTPUT_PATH --rank=512 --device=cuda:0
```
|
{"library_name": "transformers", "tags": ["mergekit", "peft"], "base_model": []}
|
thomasgauthier/OpenHermes-2.5-Mistral-7B-LoRA-extraction-r512
| null |
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:19:28+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us
|
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using mergekit.
## LoRA Details
This LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.
### Parameters
The following command was used to extract this LoRA adapter:
|
[
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
[
"TAGS\n#transformers #safetensors #mergekit #peft #endpoints_compatible #region-us \n",
"# Untitled LoRA Model (1)\n\nThis is a LoRA extracted from a language model. It was extracted using mergekit.",
"## LoRA Details\n\nThis LoRA adapter was extracted from /workspace/models/teknium_OpenHermes-2.5-Mistral-7B/ and uses /workspace/models/thomasgauthier_Mistral-7B-v0.1-zeroed_out-ChatML-base/ as a base.",
"### Parameters\n\nThe following command was used to extract this LoRA adapter:"
] |
translation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gopal-finetuned-custom-en-to-it
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
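
A minimal usage sketch is shown below; it assumes the checkpoint published as `Gopal1853/Gopal-finetuned-custom-en-to-it` and the standard `transformers` translation pipeline.

```python
from transformers import pipeline

# Minimal sketch: English-to-Italian translation with the fine-tuned MarianMT checkpoint.
translator = pipeline("translation", model="Gopal1853/Gopal-finetuned-custom-en-to-it")
result = translator("The invoice was sent yesterday.")
print(result[0]["translation_text"])
```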
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding training arguments follows the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
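
For reference, the list above maps onto `Seq2SeqTrainingArguments` roughly as sketched below. This is an illustrative reconstruction, not the exact training script; the output directory is an assumption, and the Adam betas/epsilon listed above are the optimizer defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="Gopal-finetuned-custom-en-to-it",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
)
```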
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "base_model": "Helsinki-NLP/opus-mt-en-it", "model-index": [{"name": "Gopal-finetuned-custom-en-to-it", "results": []}]}
|
Gopal1853/Gopal-finetuned-custom-en-to-it
| null |
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:19:45+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-en-it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Gopal-finetuned-custom-en-to-it
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-it on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# Gopal-finetuned-custom-en-to-it\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-it on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 100\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #marian #text2text-generation #translation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-en-it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Gopal-finetuned-custom-en-to-it\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-it on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 100\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-to-image
|
diffusers
|
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - nemod/textual_inversion_cat_toy_paper
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
```python
from diffusers import StableDiffusionPipeline  # minimal sketch; the "<cat-toy>" placeholder token is an assumption
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("nemod/textual_inversion_cat_toy_paper")
image = pipe("a <cat-toy> sitting on a wooden table").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
|
nemod/textual_inversion_cat_toy_paper
| null |
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-16T03:21:27+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# Textual inversion text2image fine-tuning - nemod/textual_inversion_cat_toy_paper
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
[
"# Textual inversion text2image fine-tuning - nemod/textual_inversion_cat_toy_paper\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
[
"TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# Textual inversion text2image fine-tuning - nemod/textual_inversion_cat_toy_paper\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
swj0419/email_STEP0000006
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:25:00+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Aura v2

The second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperatures. I will say, though, that the prose of this model is distinct from the usual GPT-3.5/4 flavor and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations.
This model, like all other Mistral-based models, is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.
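For convenience, here is a minimal, unofficial sketch of multiturn use with Transformers that follows the recommendations above. It assumes the tokenizer ships a ChatML chat template and that your installed Transformers version supports the `min_p` sampling argument; the system prompt and user message are placeholders.
```python
# Unofficial sketch -- assumes a ChatML chat template and min_p support in your transformers version.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ResplendentAI/Aura_v2_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Aura, an eloquent roleplay partner."},
    {"role": "user", "content": "Describe the city at dusk."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Temperature at or below 1.5 and Min P of 0.05, as recommended above.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=1.0, min_p=0.05)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```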
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Aura_v2_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.36|
|AI2 Reasoning Challenge (25-Shot)|73.46|
|HellaSwag (10-Shot) |88.64|
|MMLU (5-Shot) |63.97|
|TruthfulQA (0-shot) |75.17|
|Winogrande (5-shot) |84.45|
|GSM8k (5-shot) |66.49|
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": ["ResplendentAI/Paradigm_7B", "jeiku/Theory_of_Mind_Mistral", "ResplendentAI/Paradigm_7B", "jeiku/selfbot_256_mistral", "ResplendentAI/Paradigm_7B", "jeiku/Gnosis_Reformatted_Mistral", "ResplendentAI/Paradigm_7B"], "model-index": [{"name": "Aura_v2_7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 73.46, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Aura_v2_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 88.64, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Aura_v2_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.97, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Aura_v2_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 75.17}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Aura_v2_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 84.45, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Aura_v2_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 66.49, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Aura_v2_7B", "name": "Open LLM Leaderboard"}}]}]}
|
ResplendentAI/Aura_v2_7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"base_model:ResplendentAI/Paradigm_7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:25:06+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #en #base_model-ResplendentAI/Paradigm_7B #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Aura v2
=======
!image/png
The second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperatures. I will say, though, that the prose of this model is distinct from the usual GPT-3.5/4 flavor and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations.
This model, like all other Mistral-based models, is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.
Open LLM Leaderboard Evaluation Results
=======================================
Detailed results can be found here
|
[] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #en #base_model-ResplendentAI/Paradigm_7B #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Description
[alexlangshur/WizardLM-2-7B-AWQ](https://huggingface.co/alexlangshur/WizardLM-2-7B-AWQ) is a version of [microsoft/WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B) that has been quantized with 4-bit AWQ.
## Setup
### Installation
```
pip install -U accelerate autoawq transformers
```
### Inference
Below is the Python code to perform local inference on the AWQ model. Note that you must have a GPU available on your machine for this to work.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

# Load the tokenizer and the 4-bit AWQ-quantized weights, then move the model to the GPU
model_name = "alexlangshur/WizardLM-2-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoAWQForCausalLM.from_quantized(model_name, fuse_layers=True, safetensors=True).cuda()

# Tokenize the prompt and place the tensors on the GPU
text = "The meaning of life is"
inputs = tokenizer(text, return_tensors="pt").to("cuda")

# Sample up to 1024 new tokens at temperature 0.5 and decode the completion
outputs = model.generate(**inputs, do_sample=True, temperature=0.5, pad_token_id=tokenizer.eos_token_id, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
And here is the output:
```
The meaning of life is a philosophical question concerning the significance of existence or consciousness. People have different perspectives based on religious, philosophical, and individual beliefs.
In the context of the universe, the question of life's meaning is often intertwined with the question of why the universe exists and what its purpose, if any, might be. This question has been addressed by many cultures, philosophies, and religions, each offering its own answers and frameworks for understanding the significance of life.
Different perspectives on the meaning of life:
1. **Religious Views**: Many religions provide an answer to the meaning of life, often tied to the will or design of a deity or deities. For example:
- **Christianity** often speaks of a life lived in service to God and others, culminating in eternal life with God.
- **Islam** emphasizes living a life in accordance with the will of Allah and striving for a balance in life (the Middle Path).
- **Judaism** focuses on the covenant between God and the Jewish people, with an emphasis on living a life that reflects the values and commandments of the Torah.
- **Hinduism** speaks of the cycle of life, death, and rebirth (samsara), with the ultimate goal being moksha, or liberation from this cycle.
2. **Philosophical Views**: Philosophers have proposed many different perspectives on the meaning of life, including:
- **Existentialism** posits that life has no inherent meaning, and it is up to each individual to create their own meaning through their choices and actions.
- **Utilitarianism** suggests that the meaning of life is to maximize happiness and reduce suffering.
- **Stoicism** teaches that a meaningful life is one lived with virtue and reason, accepting what cannot be changed and focusing on what can.
- **Nihilism** asserts that life is without objective meaning, purpose, or intrinsic value.
3. **Scientific Views**: From a scientific standpoint, life is a product of evolution by natural selection, and its meaning is often understood in terms of survival and reproduction. However, some scientists and thinkers extend this view to suggest that life's meaning could be to explore, understand, and perhaps transcend the universe.
4. **Personal Views**: Many people find meaning in life through personal fulfillment, relationships, achievements, and the pursuit of knowledge and personal growth.
5. **Cultural Views**: Different cultures have their own narratives and traditions that shape their members' understanding of the meaning of life.
6. **Humanistic Views**: Humanism emphasizes the value and agency of human beings, individually and as a collective, and suggests that the meaning of life is to seek fulfillment and to contribute to the betterment of humanity.
7. **Absurdist Views**: The Absurd is a concept in existentialist philosophy, referring to the conflict between the human tendency to seek inherent value and meaning in life and the inability to find any, because the universe does not inherently have a purpose.
The question of the meaning of life is deeply personal and can be influenced by a myriad of factors, including one's cultural background, personal experiences, and philosophical inclinations. It remains one of the most profound and enduring questions that humans continue to explore and debate.
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["finetuned", "quantized", "4-bit", "AWQ", "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us"], "model_name": "WizardLM-2-7B-AWQ", "base_model": "microsoft/WizardLM-2-7B", "inference": true, "model_creator": "microsoft", "pipeline_tag": "text-generation", "quantized_by": "alexlangshur"}
|
alexlangshur/WizardLM-2-7B-AWQ
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"AWQ",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"en",
"base_model:microsoft/WizardLM-2-7B"
] | null |
2024-04-16T03:25:45+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #finetuned #quantized #4-bit #AWQ #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #en #base_model-microsoft/WizardLM-2-7B
|
# Description
alexlangshur/WizardLM-2-7B-AWQ is a version of microsoft/WizardLM-2-7B that has been quantized with 4-bit AWQ.
## Setup
### Installation
### Inference
Below is the Python code to perform local inference on the AWQ model. Note that you must have a GPU available on your machine for this to work.
And here is the output:
|
[
"# Description\n\nalexlangshur/WizardLM-2-7B-AWQ is a version of microsoft/WizardLM-2-7B that has been quantized with 4-bit AWQ.",
"## Setup",
"### Installation",
"### Inference\n\nBelow is the Python code to perform local inference on the AWQ model. Note that you must have a GPU available on your machine for this to work.\n\n\n\nAnd here is the output:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #finetuned #quantized #4-bit #AWQ #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us #en #base_model-microsoft/WizardLM-2-7B \n",
"# Description\n\nalexlangshur/WizardLM-2-7B-AWQ is a version of microsoft/WizardLM-2-7B that has been quantized with 4-bit AWQ.",
"## Setup",
"### Installation",
"### Inference\n\nBelow is the Python code to perform local inference on the AWQ model. Note that you must have a GPU available on your machine for this to work.\n\n\n\nAnd here is the output:"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-len_3-filtered-negative-v2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
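For illustration only (the original training script is not included in this card), the hyperparameters above map roughly onto the following `TrainingArguments`; the output directory is a placeholder and the PEFT/LoRA configuration itself is not documented here.

```python
# Illustrative sketch of the reported hyperparameters -- not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ruBert-base-sberquad-0.005-len_3-filtered-negative-v2",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=7000,
)
```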
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-len_3-filtered-negative-v2", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.005-len_3-filtered-negative-v2
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T03:25:56+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-len_3-filtered-negative-v2
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7000
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.005-len_3-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-len_3-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_2-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8386
- F1 Score: 0.8597
- Accuracy: 0.8598
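As a hedged sketch (not part of the original card), the adapter could be loaded for inference roughly as below, assuming a sequence-classification head on top of the seqsight base model; the tokenizer choice and example DNA sequence are placeholders.

```python
# Hypothetical usage sketch -- the task head, tokenizer and input are assumptions, not documented in this card.
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_17M-L32_all"
tokenizer = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_8192_512_17M")
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
prediction = model(**inputs).logits.argmax(dim=-1)
print(prediction)
```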
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.2994 | 100.0 | 200 | 0.2909 | 0.8808 | 0.8811 |
| 0.1365 | 200.0 | 400 | 0.3582 | 0.8779 | 0.8780 |
| 0.0806 | 300.0 | 600 | 0.4496 | 0.8871 | 0.8872 |
| 0.0537 | 400.0 | 800 | 0.5449 | 0.8778 | 0.8780 |
| 0.0364 | 500.0 | 1000 | 0.5822 | 0.8932 | 0.8933 |
| 0.0286 | 600.0 | 1200 | 0.5831 | 0.8932 | 0.8933 |
| 0.0215 | 700.0 | 1400 | 0.6231 | 0.8901 | 0.8902 |
| 0.0178 | 800.0 | 1600 | 0.6652 | 0.8901 | 0.8902 |
| 0.0137 | 900.0 | 1800 | 0.6735 | 0.8840 | 0.8841 |
| 0.0119 | 1000.0 | 2000 | 0.6597 | 0.8871 | 0.8872 |
| 0.0111 | 1100.0 | 2200 | 0.6623 | 0.8993 | 0.8994 |
| 0.0095 | 1200.0 | 2400 | 0.6673 | 0.8963 | 0.8963 |
| 0.0084 | 1300.0 | 2600 | 0.7273 | 0.8902 | 0.8902 |
| 0.008 | 1400.0 | 2800 | 0.6951 | 0.8993 | 0.8994 |
| 0.0064 | 1500.0 | 3000 | 0.7167 | 0.8993 | 0.8994 |
| 0.0063 | 1600.0 | 3200 | 0.7543 | 0.9055 | 0.9055 |
| 0.0059 | 1700.0 | 3400 | 0.7030 | 0.9055 | 0.9055 |
| 0.0052 | 1800.0 | 3600 | 0.7492 | 0.9024 | 0.9024 |
| 0.0045 | 1900.0 | 3800 | 0.7030 | 0.9055 | 0.9055 |
| 0.0045 | 2000.0 | 4000 | 0.7129 | 0.9055 | 0.9055 |
| 0.0042 | 2100.0 | 4200 | 0.8001 | 0.8963 | 0.8963 |
| 0.0037 | 2200.0 | 4400 | 0.7613 | 0.8932 | 0.8933 |
| 0.0037 | 2300.0 | 4600 | 0.7909 | 0.9054 | 0.9055 |
| 0.0033 | 2400.0 | 4800 | 0.7462 | 0.9024 | 0.9024 |
| 0.003 | 2500.0 | 5000 | 0.7531 | 0.9085 | 0.9085 |
| 0.0033 | 2600.0 | 5200 | 0.7623 | 0.8963 | 0.8963 |
| 0.0025 | 2700.0 | 5400 | 0.7428 | 0.9146 | 0.9146 |
| 0.0026 | 2800.0 | 5600 | 0.7679 | 0.8963 | 0.8963 |
| 0.0022 | 2900.0 | 5800 | 0.8340 | 0.9055 | 0.9055 |
| 0.0023 | 3000.0 | 6000 | 0.8434 | 0.8994 | 0.8994 |
| 0.0024 | 3100.0 | 6200 | 0.8402 | 0.8994 | 0.8994 |
| 0.0024 | 3200.0 | 6400 | 0.8382 | 0.9055 | 0.9055 |
| 0.0021 | 3300.0 | 6600 | 0.7979 | 0.9055 | 0.9055 |
| 0.0017 | 3400.0 | 6800 | 0.8379 | 0.9024 | 0.9024 |
| 0.0019 | 3500.0 | 7000 | 0.7866 | 0.9024 | 0.9024 |
| 0.0017 | 3600.0 | 7200 | 0.9065 | 0.8932 | 0.8933 |
| 0.0018 | 3700.0 | 7400 | 0.8341 | 0.9055 | 0.9055 |
| 0.0014 | 3800.0 | 7600 | 0.8920 | 0.8933 | 0.8933 |
| 0.0018 | 3900.0 | 7800 | 0.8925 | 0.8963 | 0.8963 |
| 0.0014 | 4000.0 | 8000 | 0.8705 | 0.8963 | 0.8963 |
| 0.0013 | 4100.0 | 8200 | 0.8723 | 0.8993 | 0.8994 |
| 0.0015 | 4200.0 | 8400 | 0.8334 | 0.9055 | 0.9055 |
| 0.0013 | 4300.0 | 8600 | 0.8220 | 0.9116 | 0.9116 |
| 0.0014 | 4400.0 | 8800 | 0.8262 | 0.9024 | 0.9024 |
| 0.0011 | 4500.0 | 9000 | 0.8509 | 0.9024 | 0.9024 |
| 0.0013 | 4600.0 | 9200 | 0.8719 | 0.8994 | 0.8994 |
| 0.0011 | 4700.0 | 9400 | 0.8639 | 0.8994 | 0.8994 |
| 0.0011 | 4800.0 | 9600 | 0.8510 | 0.9055 | 0.9055 |
| 0.0009 | 4900.0 | 9800 | 0.8718 | 0.8932 | 0.8933 |
| 0.0011 | 5000.0 | 10000 | 0.8669 | 0.8994 | 0.8994 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_mouse_2-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_mouse_2-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T03:28:15+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_mouse\_2-seqsight\_8192\_512\_17M-L32\_all
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8386
* F1 Score: 0.8597
* Accuracy: 0.8598
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
weqweasdas/raft_baseline_zephyr_packing_model6_1_4_e6
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:28:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_splice_reconstructed-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3863
- F1 Score: 0.8893
- Accuracy: 0.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.7291 | 11.11 | 200 | 0.4855 | 0.7922 | 0.7911 |
| 0.4377 | 22.22 | 400 | 0.4262 | 0.8319 | 0.8310 |
| 0.379 | 33.33 | 600 | 0.3956 | 0.8474 | 0.8466 |
| 0.3493 | 44.44 | 800 | 0.3800 | 0.8580 | 0.8573 |
| 0.3287 | 55.56 | 1000 | 0.3680 | 0.8571 | 0.8564 |
| 0.3128 | 66.67 | 1200 | 0.3707 | 0.8621 | 0.8615 |
| 0.2977 | 77.78 | 1400 | 0.3637 | 0.8637 | 0.8630 |
| 0.2858 | 88.89 | 1600 | 0.3536 | 0.8704 | 0.8698 |
| 0.2751 | 100.0 | 1800 | 0.3407 | 0.8751 | 0.8746 |
| 0.2657 | 111.11 | 2000 | 0.3503 | 0.8730 | 0.8724 |
| 0.2566 | 122.22 | 2200 | 0.3542 | 0.8752 | 0.8746 |
| 0.2473 | 133.33 | 2400 | 0.3394 | 0.8807 | 0.8801 |
| 0.2402 | 144.44 | 2600 | 0.3478 | 0.8794 | 0.8788 |
| 0.2311 | 155.56 | 2800 | 0.3355 | 0.8847 | 0.8843 |
| 0.2252 | 166.67 | 3000 | 0.3616 | 0.8741 | 0.8735 |
| 0.2185 | 177.78 | 3200 | 0.3380 | 0.8854 | 0.8849 |
| 0.2131 | 188.89 | 3400 | 0.3472 | 0.8817 | 0.8812 |
| 0.2077 | 200.0 | 3600 | 0.3438 | 0.8828 | 0.8823 |
| 0.202 | 211.11 | 3800 | 0.3464 | 0.8830 | 0.8825 |
| 0.1965 | 222.22 | 4000 | 0.3523 | 0.8820 | 0.8814 |
| 0.1912 | 233.33 | 4200 | 0.3602 | 0.8807 | 0.8801 |
| 0.1867 | 244.44 | 4400 | 0.3542 | 0.8830 | 0.8825 |
| 0.1827 | 255.56 | 4600 | 0.3687 | 0.8804 | 0.8799 |
| 0.1791 | 266.67 | 4800 | 0.3514 | 0.8858 | 0.8854 |
| 0.1748 | 277.78 | 5000 | 0.3498 | 0.8867 | 0.8862 |
| 0.1712 | 288.89 | 5200 | 0.3637 | 0.8839 | 0.8834 |
| 0.1684 | 300.0 | 5400 | 0.3609 | 0.8848 | 0.8843 |
| 0.1669 | 311.11 | 5600 | 0.3644 | 0.8841 | 0.8836 |
| 0.1636 | 322.22 | 5800 | 0.3601 | 0.8882 | 0.8878 |
| 0.1587 | 333.33 | 6000 | 0.3829 | 0.8842 | 0.8836 |
| 0.1577 | 344.44 | 6200 | 0.3714 | 0.8846 | 0.8840 |
| 0.1533 | 355.56 | 6400 | 0.3740 | 0.8863 | 0.8858 |
| 0.1522 | 366.67 | 6600 | 0.3757 | 0.8839 | 0.8834 |
| 0.1501 | 377.78 | 6800 | 0.3837 | 0.8857 | 0.8851 |
| 0.1483 | 388.89 | 7000 | 0.3841 | 0.8850 | 0.8845 |
| 0.1466 | 400.0 | 7200 | 0.3810 | 0.8839 | 0.8834 |
| 0.1454 | 411.11 | 7400 | 0.3973 | 0.8837 | 0.8832 |
| 0.1427 | 422.22 | 7600 | 0.3869 | 0.8846 | 0.8840 |
| 0.1415 | 433.33 | 7800 | 0.3746 | 0.8880 | 0.8875 |
| 0.1401 | 444.44 | 8000 | 0.3869 | 0.8863 | 0.8858 |
| 0.1387 | 455.56 | 8200 | 0.3874 | 0.8850 | 0.8845 |
| 0.1379 | 466.67 | 8400 | 0.3843 | 0.8856 | 0.8851 |
| 0.1353 | 477.78 | 8600 | 0.3916 | 0.8852 | 0.8847 |
| 0.1353 | 488.89 | 8800 | 0.3944 | 0.8852 | 0.8847 |
| 0.1339 | 500.0 | 9000 | 0.3868 | 0.8876 | 0.8871 |
| 0.133 | 511.11 | 9200 | 0.3940 | 0.8887 | 0.8882 |
| 0.1343 | 522.22 | 9400 | 0.3945 | 0.8850 | 0.8845 |
| 0.1335 | 533.33 | 9600 | 0.3932 | 0.8854 | 0.8849 |
| 0.1319 | 544.44 | 9800 | 0.3944 | 0.8870 | 0.8865 |
| 0.1321 | 555.56 | 10000 | 0.3965 | 0.8868 | 0.8862 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T03:29:29+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_splice\_reconstructed-seqsight\_8192\_512\_17M-L32\_all
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3863
* F1 Score: 0.8893
* Accuracy: 0.8889
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-image
|
diffusers
|
## Dark-Sushi-Mix-2.25D
<img src="" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details -
[](https://imagepipeline.io/models/Dark-Sushi-Mix-2.25D?id=7f62f711-cfcd-482f-9e27-abbd61a4d6bd/)
## How to try this model?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "7f62f711-cfcd-482f-9e27-abbd61a4d6bd",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
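As a quick illustration of the LoRA-related fields in the table above (all values below are placeholders; substitute real model ids from the models page and check the docs for the exact accepted formats):

```python
# Illustrative payload fragment only; the LoRA ids and weights are placeholders, not real models.
payload_overrides = {
    "num_inference_steps": "40",   # ideal value 30-50 (without LCM)
    "guidance_scale": 9.0,         # ideal value 7.5-12.5
    "lora_models": "your_lora_model_id",
    "lora_weights": "0.7",
}
```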
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
{"license": "creativeml-openrail-m", "tags": ["imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic"], "pinned": false, "pipeline_tag": "text-to-image"}
|
imagepipeline/Dark-Sushi-Mix-2.25D
| null |
[
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null |
2024-04-16T03:29:46+00:00
|
[] |
[] |
TAGS
#diffusers #imagepipeline #imagepipeline.io #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
Dark-Sushi-Mix-2.25D
--------------------
![Generated on Image Pipeline]()
This checkpoint model is uploaded on URL
Model details -
 on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3708
- F1 Score: 0.8237
- Accuracy: 0.824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5056 | 12.5 | 200 | 0.4640 | 0.7688 | 0.769 |
| 0.454 | 25.0 | 400 | 0.4555 | 0.7700 | 0.77 |
| 0.4389 | 37.5 | 600 | 0.4573 | 0.7761 | 0.776 |
| 0.4271 | 50.0 | 800 | 0.4511 | 0.7838 | 0.784 |
| 0.4183 | 62.5 | 1000 | 0.4510 | 0.7850 | 0.785 |
| 0.4106 | 75.0 | 1200 | 0.4602 | 0.7839 | 0.784 |
| 0.403 | 87.5 | 1400 | 0.4458 | 0.7890 | 0.789 |
| 0.3967 | 100.0 | 1600 | 0.4492 | 0.7900 | 0.79 |
| 0.3899 | 112.5 | 1800 | 0.4540 | 0.7801 | 0.78 |
| 0.3838 | 125.0 | 2000 | 0.4589 | 0.7770 | 0.777 |
| 0.3778 | 137.5 | 2200 | 0.4702 | 0.7779 | 0.778 |
| 0.3708 | 150.0 | 2400 | 0.4743 | 0.7720 | 0.772 |
| 0.3651 | 162.5 | 2600 | 0.4720 | 0.7780 | 0.778 |
| 0.3586 | 175.0 | 2800 | 0.5017 | 0.7716 | 0.772 |
| 0.352 | 187.5 | 3000 | 0.4980 | 0.7770 | 0.777 |
| 0.3463 | 200.0 | 3200 | 0.5043 | 0.7691 | 0.769 |
| 0.3393 | 212.5 | 3400 | 0.5126 | 0.7671 | 0.767 |
| 0.3334 | 225.0 | 3600 | 0.5161 | 0.7590 | 0.759 |
| 0.3281 | 237.5 | 3800 | 0.5270 | 0.7560 | 0.756 |
| 0.3216 | 250.0 | 4000 | 0.5433 | 0.7618 | 0.762 |
| 0.3155 | 262.5 | 4200 | 0.5345 | 0.7650 | 0.765 |
| 0.3108 | 275.0 | 4400 | 0.5465 | 0.7621 | 0.762 |
| 0.3068 | 287.5 | 4600 | 0.5516 | 0.758 | 0.758 |
| 0.3014 | 300.0 | 4800 | 0.5469 | 0.7641 | 0.764 |
| 0.2956 | 312.5 | 5000 | 0.5712 | 0.7630 | 0.763 |
| 0.2922 | 325.0 | 5200 | 0.5693 | 0.7651 | 0.765 |
| 0.2878 | 337.5 | 5400 | 0.5830 | 0.7609 | 0.761 |
| 0.2833 | 350.0 | 5600 | 0.5993 | 0.7620 | 0.762 |
| 0.2801 | 362.5 | 5800 | 0.5872 | 0.7651 | 0.765 |
| 0.2761 | 375.0 | 6000 | 0.5936 | 0.7610 | 0.761 |
| 0.2723 | 387.5 | 6200 | 0.6152 | 0.7640 | 0.764 |
| 0.2684 | 400.0 | 6400 | 0.6041 | 0.7621 | 0.762 |
| 0.2663 | 412.5 | 6600 | 0.6119 | 0.7621 | 0.762 |
| 0.2633 | 425.0 | 6800 | 0.6200 | 0.7641 | 0.764 |
| 0.2605 | 437.5 | 7000 | 0.6179 | 0.7611 | 0.761 |
| 0.258 | 450.0 | 7200 | 0.6266 | 0.7661 | 0.766 |
| 0.2555 | 462.5 | 7400 | 0.6366 | 0.7651 | 0.765 |
| 0.2544 | 475.0 | 7600 | 0.6326 | 0.76 | 0.76 |
| 0.2513 | 487.5 | 7800 | 0.6284 | 0.766 | 0.766 |
| 0.2498 | 500.0 | 8000 | 0.6408 | 0.7620 | 0.762 |
| 0.2475 | 512.5 | 8200 | 0.6369 | 0.7701 | 0.77 |
| 0.2451 | 525.0 | 8400 | 0.6480 | 0.7661 | 0.766 |
| 0.2446 | 537.5 | 8600 | 0.6488 | 0.7620 | 0.762 |
| 0.2438 | 550.0 | 8800 | 0.6485 | 0.7620 | 0.762 |
| 0.2424 | 562.5 | 9000 | 0.6499 | 0.7650 | 0.765 |
| 0.2416 | 575.0 | 9200 | 0.6546 | 0.7630 | 0.763 |
| 0.2411 | 587.5 | 9400 | 0.6572 | 0.7610 | 0.761 |
| 0.2391 | 600.0 | 9600 | 0.6602 | 0.7630 | 0.763 |
| 0.2391 | 612.5 | 9800 | 0.6592 | 0.7621 | 0.762 |
| 0.2381 | 625.0 | 10000 | 0.6610 | 0.7610 | 0.761 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_tf_0-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_0-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T03:30:59+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_tf\_0-seqsight\_8192\_512\_17M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3708
* F1 Score: 0.8237
* Accuracy: 0.824
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_1-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4515
- F1 Score: 0.8087
- Accuracy: 0.809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.522 | 10.0 | 200 | 0.5188 | 0.7503 | 0.751 |
| 0.4741 | 20.0 | 400 | 0.5071 | 0.7466 | 0.747 |
| 0.4605 | 30.0 | 600 | 0.5018 | 0.7424 | 0.743 |
| 0.4489 | 40.0 | 800 | 0.5029 | 0.7381 | 0.739 |
| 0.4409 | 50.0 | 1000 | 0.4904 | 0.7530 | 0.753 |
| 0.4339 | 60.0 | 1200 | 0.4921 | 0.748 | 0.748 |
| 0.4259 | 70.0 | 1400 | 0.4954 | 0.7447 | 0.745 |
| 0.4193 | 80.0 | 1600 | 0.4961 | 0.7518 | 0.752 |
| 0.4113 | 90.0 | 1800 | 0.4944 | 0.7459 | 0.746 |
| 0.4048 | 100.0 | 2000 | 0.5017 | 0.7448 | 0.745 |
| 0.3978 | 110.0 | 2200 | 0.5078 | 0.7520 | 0.752 |
| 0.3906 | 120.0 | 2400 | 0.5091 | 0.7382 | 0.739 |
| 0.3836 | 130.0 | 2600 | 0.5239 | 0.7408 | 0.741 |
| 0.3766 | 140.0 | 2800 | 0.5260 | 0.7550 | 0.755 |
| 0.3696 | 150.0 | 3000 | 0.5395 | 0.7404 | 0.741 |
| 0.3643 | 160.0 | 3200 | 0.5443 | 0.7458 | 0.746 |
| 0.3579 | 170.0 | 3400 | 0.5454 | 0.7457 | 0.746 |
| 0.3518 | 180.0 | 3600 | 0.5469 | 0.7459 | 0.746 |
| 0.3471 | 190.0 | 3800 | 0.5572 | 0.7442 | 0.745 |
| 0.3414 | 200.0 | 4000 | 0.5514 | 0.7508 | 0.751 |
| 0.3361 | 210.0 | 4200 | 0.5726 | 0.7516 | 0.752 |
| 0.3315 | 220.0 | 4400 | 0.5715 | 0.7533 | 0.754 |
| 0.3265 | 230.0 | 4600 | 0.5775 | 0.7609 | 0.761 |
| 0.3208 | 240.0 | 4800 | 0.5794 | 0.7483 | 0.749 |
| 0.3175 | 250.0 | 5000 | 0.5817 | 0.7588 | 0.759 |
| 0.3126 | 260.0 | 5200 | 0.5985 | 0.7590 | 0.759 |
| 0.3088 | 270.0 | 5400 | 0.5988 | 0.7565 | 0.757 |
| 0.3058 | 280.0 | 5600 | 0.6090 | 0.7518 | 0.752 |
| 0.3009 | 290.0 | 5800 | 0.6039 | 0.7577 | 0.758 |
| 0.2982 | 300.0 | 6000 | 0.6128 | 0.7550 | 0.755 |
| 0.2935 | 310.0 | 6200 | 0.6252 | 0.7457 | 0.746 |
| 0.29 | 320.0 | 6400 | 0.6210 | 0.7455 | 0.746 |
| 0.2881 | 330.0 | 6600 | 0.6308 | 0.7505 | 0.751 |
| 0.2851 | 340.0 | 6800 | 0.6292 | 0.7538 | 0.754 |
| 0.2818 | 350.0 | 7000 | 0.6355 | 0.7507 | 0.751 |
| 0.2797 | 360.0 | 7200 | 0.6359 | 0.7519 | 0.752 |
| 0.2764 | 370.0 | 7400 | 0.6492 | 0.7446 | 0.745 |
| 0.2749 | 380.0 | 7600 | 0.6525 | 0.7434 | 0.744 |
| 0.2733 | 390.0 | 7800 | 0.6544 | 0.7508 | 0.751 |
| 0.2719 | 400.0 | 8000 | 0.6547 | 0.7517 | 0.752 |
| 0.2693 | 410.0 | 8200 | 0.6610 | 0.7549 | 0.755 |
| 0.267 | 420.0 | 8400 | 0.6642 | 0.7508 | 0.751 |
| 0.2662 | 430.0 | 8600 | 0.6757 | 0.7487 | 0.749 |
| 0.2656 | 440.0 | 8800 | 0.6665 | 0.7547 | 0.755 |
| 0.2653 | 450.0 | 9000 | 0.6660 | 0.7549 | 0.755 |
| 0.2628 | 460.0 | 9200 | 0.6709 | 0.7508 | 0.751 |
| 0.2611 | 470.0 | 9400 | 0.6724 | 0.7519 | 0.752 |
| 0.2629 | 480.0 | 9600 | 0.6700 | 0.7498 | 0.75 |
| 0.262 | 490.0 | 9800 | 0.6699 | 0.7488 | 0.749 |
| 0.2603 | 500.0 | 10000 | 0.6714 | 0.7508 | 0.751 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_tf_1-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_1-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T03:31:26+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_tf\_1-seqsight\_8192\_512\_17M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4515
* F1 Score: 0.8087
* Accuracy: 0.809
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 1536
* eval\_batch\_size: 1536
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-sft-orca_chat-full
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the ucla-cmllab/orca-chat_100k-chat-format dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
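
The batch-size totals above follow from the per-device values: the effective train batch size is the per-device batch size times the number of devices times the gradient-accumulation steps. A quick arithmetic check (plain Python, using only the numbers listed above):

```
# Sanity check for the effective batch sizes reported in the list above.
per_device_train_batch_size = 16
num_devices = 4
gradient_accumulation_steps = 2

total_train_batch_size = per_device_train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 128   # matches total_train_batch_size above

per_device_eval_batch_size = 8
total_eval_batch_size = per_device_eval_batch_size * num_devices  # no accumulation at eval time
assert total_eval_batch_size == 32     # matches total_eval_batch_size above
```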
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9774 | 1.0 | 781 | 0.9624 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["ucla-cmllab/orca-chat_100k-chat-format"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "model-index": [{"name": "tinyllama-sft-orca_chat-full", "results": []}]}
|
andrewbai/tinyllama-sft-orca_chat-full
| null |
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:ucla-cmllab/orca-chat_100k-chat-format",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:31:37+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-ucla-cmllab/orca-chat_100k-chat-format #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
tinyllama-sft-orca\_chat-full
=============================
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T on the ucla-cmllab/orca-chat\_100k-chat-format dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9624
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.2.2+cu121
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-ucla-cmllab/orca-chat_100k-chat-format #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
# How To Use This Model
## Sahidic Example With No Confidence Score
```
from transformers import pipeline
pipe = pipeline(model="megalaa/coptic-english-translator", trust_remote_code=True)
output = pipe("ⲓⲏⲥⲟⲩⲥ ⲡⲉⲭⲣⲓⲥⲧⲟⲥ")
print(output)
# {'translation': 'Jesus Christ,'}
```
## Parameters
By default, this model translates from Sahidic Coptic to English.
Use `from_bohairic=True` if you are translating from Bohairic Coptic to English.
Additionally, use `output_confidence=True` if you want to output the model confidence.
## Bohairic Example With Confidence Score
```
from transformers import pipeline
pipe = pipeline(model="megalaa/coptic-english-translator", trust_remote_code=True)
output = pipe("ⲓⲏⲥ ⲡⲭⲥ", from_bohairic=True, output_confidence=True)
print(output)
# {'translation': 'Jesus Christ.', 'confidence': 0.7219238269534208}
```
|
{"language": ["en", "cop"], "license": "agpl-3.0"}
|
megalaa/coptic-english-translator
| null |
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"en",
"cop",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:31:45+00:00
|
[] |
[
"en",
"cop"
] |
TAGS
#transformers #safetensors #marian #text2text-generation #en #cop #license-agpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# How To Use This Model
## Sahidic Example With No Confidence Score
## Parameters
By default, this model translates from Sahidic Coptic to English.
Use 'from_bohairic=True' if you are translating from Bohairic Coptic to English.
Additionally, use 'output_confidence=True' if you want to output the model confidence.
## Bohairic Example With Confidence Score
|
[
"# How To Use This Model",
"## Sahidic Example With No Confidence Score",
"## Parameters\nBy default, this models translates from Sahidic Coptic to English. \n\nUse 'from_bohairic=True' if you are translating from Bohairic Coptic to English. \n\nAdditionally, use 'output_confidence=True' if you want to output the model confidence.",
"## Bohairic Example With Confidence Score"
] |
[
"TAGS\n#transformers #safetensors #marian #text2text-generation #en #cop #license-agpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# How To Use This Model",
"## Sahidic Example With No Confidence Score",
"## Parameters\nBy default, this models translates from Sahidic Coptic to English. \n\nUse 'from_bohairic=True' if you are translating from Bohairic Coptic to English. \n\nAdditionally, use 'output_confidence=True' if you want to output the model confidence.",
"## Bohairic Example With Confidence Score"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
emozilla/llama-1.1b-init
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:37:02+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
token-classification
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
AwesomeREK/concept-extraction-xlnet-early-stopping-teacher-student-self-trained
| null |
[
"transformers",
"safetensors",
"xlnet",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:38:47+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #xlnet #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
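
## How to use

The card itself gives no usage instructions, so the following is a minimal, hedged sketch of how a Llama-2-chat fine-tune like this one is typically loaded with `transformers`. The repository id is taken from this record; the prompt text is illustrative, chat-prompt formatting is omitted, and access to the Llama-2 base weights may be gated.

```
# Hedged usage sketch (not part of the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jfo150/llama-2-brainstems-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```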
|
{"tags": ["generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "results", "results": []}]}
|
jfo150/llama-2-brainstems-chat
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:41:43+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# results
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# results\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# results\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
weqweasdas/raft_baseline_zephyr_packing_model6_1_4_e6_weight085
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:41:50+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.005-len_3-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
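
Since this is a PEFT adapter rather than a full model, it is loaded on top of the `ai-forever/ruBert-base` backbone. The snippet below is a hedged sketch of that pattern; the adapter repository id is taken from this record, and the choice of task head (a masked-LM class here) is an assumption.

```
# Hedged sketch: attaching a PEFT adapter such as this one to the base model.
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import PeftModel

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.005-len_3-filtered-negative"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForMaskedLM.from_pretrained(base_id)   # task head is an assumption
model = PeftModel.from_pretrained(base_model, adapter_id)    # attaches the adapter weights
```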
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.005-len_3-filtered-negative", "results": []}]}
|
Shalazary/ruBert-base-sberquad-0.005-len_3-filtered-negative
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null |
2024-04-16T03:43:24+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
|
# ruBert-base-sberquad-0.005-len_3-filtered-negative
This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ruBert-base-sberquad-0.005-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n",
"# ruBert-base-sberquad-0.005-len_3-filtered-negative\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-chatGPT
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3771
- Accuracy: 0.9001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.7988668524141836e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
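
The schedule described above (linear decay with 500 warm-up steps) corresponds to the standard `transformers` scheduler. The snippet below is a hedged illustration only: the model is a stand-in, and the total step count is read off the results table below (3 epochs × 2250 steps per epoch).

```
# Hedged illustration of the linear-warmup schedule described above.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(4, 2)  # stand-in for the real model
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=3.7988668524141836e-05,
    betas=(0.9, 0.999),
    eps=1e-8,
)

num_training_steps = 6750      # 3 epochs x 2250 steps per epoch (see results table)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=500,      # lr_scheduler_warmup_steps
    num_training_steps=num_training_steps,
)
```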
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4095 | 1.0 | 2250 | 0.3938 | 0.8669 |
| 0.2796 | 2.0 | 4500 | 0.3359 | 0.8888 |
| 0.1891 | 3.0 | 6750 | 0.3771 | 0.9001 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-finetuned-chatGPT", "results": []}]}
|
iaminhridoy/bert-finetuned-chatGPT-discourse
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:45:55+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-chatGPT
======================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3771
* Accuracy: 0.9001
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3.7988668524141836e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 3
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Tokenizers 0.19.1
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3.7988668524141836e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 3\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.19.1"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3.7988668524141836e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 3\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Tokenizers 0.19.1"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClasificadorMotivoMora-Distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5702
- Accuracy: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.755 | 1.0 | 845 | 0.7609 | 0.7656 |
| 0.6265 | 2.0 | 1690 | 0.6030 | 0.8077 |
| 0.5401 | 3.0 | 2535 | 0.5702 | 0.8095 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
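### Configuration sketch
As a rough illustration only, the snippet below mirrors the hyperparameters listed above in `TrainingArguments`. The dataset, label count, and preprocessing are not described in this card, so those parts are placeholders.
```python
# Hedged reconstruction of the training setup from the hyperparameters above.
# num_labels and the datasets are unknown and must be supplied by the reader.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,  # placeholder: the card does not state the label count
)

args = TrainingArguments(
    output_dir="ClasificadorMotivoMora-Distilbert",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```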
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "ClasificadorMotivoMora-Distilbert", "results": []}]}
|
Arodrigo/ClasificadorMotivoMora-Distilbert
| null |
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:46:01+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
ClasificadorMotivoMora-Distilbert
=================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5702
* Accuracy: 0.8095
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
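Since this section is unfilled, the snippet below is only a guess based on the repository tags (`llama`, `text-generation`); it is not taken from the authors.
```python
# Hedged sketch: load the checkpoint as a causal LM and generate a few tokens.
# The repo id comes from this card's metadata; everything else is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swj0419/email_STEP0000009"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Dear team,", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```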
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
swj0419/email_STEP0000009
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-16T03:47:00+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# DavidAU/PIPPA-Named-7B-Q6_K-GGUF
This model was converted to GGUF format from [`mpasila/PIPPA-Named-7B`](https://huggingface.co/mpasila/PIPPA-Named-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mpasila/PIPPA-Named-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/PIPPA-Named-7B-Q6_K-GGUF --model pippa-named-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/PIPPA-Named-7B-Q6_K-GGUF --model pippa-named-7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pippa-named-7b.Q6_K.gguf -n 128
```
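The snippet below is a hedged alternative to the CLI above, using the `llama-cpp-python` bindings instead of the llama.cpp binaries; it assumes the quantized file has already been downloaded locally.
```python
# Hedged alternative to the CLI above, via llama-cpp-python
# (pip install llama-cpp-python); assumes pippa-named-7b.Q6_K.gguf is local.
from llama_cpp import Llama

llm = Llama(model_path="pippa-named-7b.Q6_K.gguf", n_ctx=2048)
result = llm("The meaning to life and the universe is", max_tokens=64)
print(result["choices"][0]["text"])
```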
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft", "llama-cpp", "gguf-my-repo"], "datasets": ["mpasila/PIPPA-ShareGPT-formatted-named", "KaraKaraWitch/PIPPA-ShareGPT-formatted"], "base_model": "unsloth/mistral-7b-v0.2-bnb-4bit"}
|
DavidAU/PIPPA-Named-7B-Q6_K-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:mpasila/PIPPA-ShareGPT-formatted-named",
"dataset:KaraKaraWitch/PIPPA-ShareGPT-formatted",
"base_model:unsloth/mistral-7b-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-16T03:48:06+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #sft #llama-cpp #gguf-my-repo #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# DavidAU/PIPPA-Named-7B-Q6_K-GGUF
This model was converted to GGUF format from 'mpasila/PIPPA-Named-7B' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# DavidAU/PIPPA-Named-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/PIPPA-Named-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #sft #llama-cpp #gguf-my-repo #en #dataset-mpasila/PIPPA-ShareGPT-formatted-named #dataset-KaraKaraWitch/PIPPA-ShareGPT-formatted #base_model-unsloth/mistral-7b-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DavidAU/PIPPA-Named-7B-Q6_K-GGUF\nThis model was converted to GGUF format from 'mpasila/PIPPA-Named-7B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_4-seqsight_8192_512_17M-L32_all
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_8192_512_17M](https://huggingface.co/mahdibaghbanzadeh/seqsight_8192_512_17M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5717
- F1 Score: 0.8480
- Accuracy: 0.848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5004 | 20.0 | 200 | 0.4865 | 0.7625 | 0.763 |
| 0.4312 | 40.0 | 400 | 0.4713 | 0.7699 | 0.77 |
| 0.4058 | 60.0 | 600 | 0.4662 | 0.7736 | 0.774 |
| 0.3842 | 80.0 | 800 | 0.4590 | 0.7889 | 0.789 |
| 0.3622 | 100.0 | 1000 | 0.4553 | 0.8029 | 0.803 |
| 0.3425 | 120.0 | 1200 | 0.4609 | 0.8075 | 0.808 |
| 0.3208 | 140.0 | 1400 | 0.4531 | 0.8110 | 0.811 |
| 0.3017 | 160.0 | 1600 | 0.4534 | 0.8059 | 0.806 |
| 0.2847 | 180.0 | 1800 | 0.4542 | 0.8128 | 0.813 |
| 0.2674 | 200.0 | 2000 | 0.4574 | 0.8209 | 0.821 |
| 0.2506 | 220.0 | 2200 | 0.4612 | 0.8223 | 0.823 |
| 0.2372 | 240.0 | 2400 | 0.4587 | 0.8258 | 0.826 |
| 0.223 | 260.0 | 2600 | 0.4813 | 0.8261 | 0.827 |
| 0.2102 | 280.0 | 2800 | 0.4743 | 0.8346 | 0.835 |
| 0.199 | 300.0 | 3000 | 0.4895 | 0.8393 | 0.84 |
| 0.1896 | 320.0 | 3200 | 0.4877 | 0.8447 | 0.845 |
| 0.1778 | 340.0 | 3400 | 0.5176 | 0.8443 | 0.845 |
| 0.1685 | 360.0 | 3600 | 0.5253 | 0.8422 | 0.843 |
| 0.1579 | 380.0 | 3800 | 0.5249 | 0.8507 | 0.851 |
| 0.1519 | 400.0 | 4000 | 0.5456 | 0.8465 | 0.847 |
| 0.1439 | 420.0 | 4200 | 0.5699 | 0.8421 | 0.843 |
| 0.138 | 440.0 | 4400 | 0.5749 | 0.8433 | 0.844 |
| 0.1317 | 460.0 | 4600 | 0.6049 | 0.8411 | 0.842 |
| 0.1259 | 480.0 | 4800 | 0.5963 | 0.8454 | 0.846 |
| 0.1218 | 500.0 | 5000 | 0.6160 | 0.8412 | 0.842 |
| 0.1163 | 520.0 | 5200 | 0.6487 | 0.8401 | 0.841 |
| 0.1128 | 540.0 | 5400 | 0.6055 | 0.8515 | 0.852 |
| 0.1082 | 560.0 | 5600 | 0.6416 | 0.8433 | 0.844 |
| 0.1055 | 580.0 | 5800 | 0.6497 | 0.8412 | 0.842 |
| 0.1015 | 600.0 | 6000 | 0.6083 | 0.8535 | 0.854 |
| 0.1 | 620.0 | 6200 | 0.6507 | 0.8423 | 0.843 |
| 0.0961 | 640.0 | 6400 | 0.6548 | 0.8402 | 0.841 |
| 0.094 | 660.0 | 6600 | 0.6533 | 0.8474 | 0.848 |
| 0.0928 | 680.0 | 6800 | 0.6730 | 0.8362 | 0.837 |
| 0.09 | 700.0 | 7000 | 0.6638 | 0.8444 | 0.845 |
| 0.0881 | 720.0 | 7200 | 0.6935 | 0.8381 | 0.839 |
| 0.0864 | 740.0 | 7400 | 0.6718 | 0.8402 | 0.841 |
| 0.0839 | 760.0 | 7600 | 0.6885 | 0.8391 | 0.84 |
| 0.0823 | 780.0 | 7800 | 0.7107 | 0.8361 | 0.837 |
| 0.0818 | 800.0 | 8000 | 0.6827 | 0.8443 | 0.845 |
| 0.0803 | 820.0 | 8200 | 0.7020 | 0.8372 | 0.838 |
| 0.0796 | 840.0 | 8400 | 0.7019 | 0.8413 | 0.842 |
| 0.0784 | 860.0 | 8600 | 0.7179 | 0.8392 | 0.84 |
| 0.077 | 880.0 | 8800 | 0.7040 | 0.8443 | 0.845 |
| 0.0767 | 900.0 | 9000 | 0.7003 | 0.8454 | 0.846 |
| 0.0762 | 920.0 | 9200 | 0.7067 | 0.8463 | 0.847 |
| 0.0752 | 940.0 | 9400 | 0.7150 | 0.8453 | 0.846 |
| 0.075 | 960.0 | 9600 | 0.7132 | 0.8433 | 0.844 |
| 0.0736 | 980.0 | 9800 | 0.7169 | 0.8433 | 0.844 |
| 0.0747 | 1000.0 | 10000 | 0.7159 | 0.8433 | 0.844 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
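### Loading the adapter (sketch)
The card does not show how to consume the adapter, so the snippet below is an assumption: it attaches the PEFT weights to the base `seqsight` checkpoint with a sequence-classification head, which may not match the exact head class used during training.
```python
# Hedged sketch of loading this PEFT adapter on its base model.
# The head class and num_labels are assumptions, not taken from the card.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_8192_512_17M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_17M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```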
|
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_8192_512_17M", "model-index": [{"name": "GUE_tf_4-seqsight_8192_512_17M-L32_all", "results": []}]}
|
mahdibaghbanzadeh/GUE_tf_4-seqsight_8192_512_17M-L32_all
| null |
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_8192_512_17M",
"region:us"
] | null |
2024-04-16T03:48:16+00:00
|
[] |
[] |
TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us
|
GUE\_tf\_4-seqsight\_8192\_512\_17M-L32\_all
============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_8192\_512\_17M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5717
* F1 Score: 0.8480
* Accuracy: 0.848
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 2048
* eval\_batch\_size: 2048
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_8192_512_17M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |