modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-08 06:28:24) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 492 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-08 06:28:24) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---
pinzhenchen/sft-lora-fr-pythia-1b | pinzhenchen | 2024-03-05T23:51:46Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "fr", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:51:43Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
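Below is a minimal loading sketch, not taken from the authors' repository: it only illustrates attaching this adapter to the base model named above with `transformers` and `peft`; the prompt text and generation settings are placeholders.
```python
# Minimal sketch: attach this LoRA adapter to its base model and generate.
# The prompt format used during training is defined in the authors' repo;
# the plain French instruction below is only an illustrative placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-1b-deduped"          # base model listed above
adapter_id = "pinzhenchen/sft-lora-fr-pythia-1b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # wrap the base with the LoRA weights
model.eval()

inputs = tokenizer("Expliquez brièvement ce qu'est l'apprentissage automatique.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```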
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-pythia-1b | pinzhenchen | 2024-03-05T23:51:37Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "es", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:51:34Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-pythia-1b | pinzhenchen | 2024-03-05T23:51:32Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "en", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:51:30Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-pythia-410m | pinzhenchen | 2024-03-05T23:51:12Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "ru", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:51:09Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fr-pythia-410m | pinzhenchen | 2024-03-05T23:51:08Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "fr", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:51:05Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fi-pythia-410m | pinzhenchen | 2024-03-05T23:51:04Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "fi", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:51:01Z |
---
language:
- fi
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: Finnish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-cs-pythia-410m | pinzhenchen | 2024-03-05T23:50:48Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "cs", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:50:45Z |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped)
* Instruction tuning language: Czech
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-pythia-160m | pinzhenchen | 2024-03-05T23:50:35Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "ru", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:50:31Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fi-pythia-160m | pinzhenchen | 2024-03-05T23:50:26Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "fi", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:50:23Z |
---
language:
- fi
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Finnish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-pythia-160m | pinzhenchen | 2024-03-05T23:50:06Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "bg", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:50:03Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-pythia-70m | pinzhenchen | 2024-03-05T23:50:02Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "zh", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:49:59Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-cs-pythia-70m | pinzhenchen | 2024-03-05T23:49:35Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "cs", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:49:33Z |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped)
* Instruction tuning language: Czech
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-ollama-13b | pinzhenchen | 2024-03-05T23:49:27Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "zh", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:49:24Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-ollama-13b | pinzhenchen | 2024-03-05T23:49:22Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "en", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:49:19Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-ollama-7b | pinzhenchen | 2024-03-05T23:49:04Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "es", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:49:01Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-ollama-7b | pinzhenchen | 2024-03-05T23:48:59Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "en", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:48:56Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-ollama-7b | pinzhenchen | 2024-03-05T23:48:55Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "bg", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:48:52Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-ollama-3b | pinzhenchen | 2024-03-05T23:48:51Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "zh", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:48:48Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-ollama-3b | pinzhenchen | 2024-03-05T23:48:33Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "es", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:48:30Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
abhilad98/mpnet | abhilad98 | 2024-03-05T23:48:30Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-03-05T23:48:07Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# abhilad98/mpnet
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

# Sentences to embed
sentences = ["This is an example sentence", "Each sentence is converted"]

# Load this model from the Hugging Face Hub
model = SentenceTransformer('abhilad98/mpnet')

# Compute one 768-dimensional embedding per sentence
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('abhilad98/mpnet')
model = AutoModel.from_pretrained('abhilad98/mpnet')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhilad98/mpnet)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 186 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 186,
"weight_decay": 0.01
}
```
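For readers who want to reproduce a comparable setup, the sketch below reassembles the reported parameters into a `sentence-transformers` training script. Only the hyperparameters are given in this card, so the starting checkpoint and the training pairs used here are assumptions, not the author's actual data.
```python
# Rough reconstruction of the reported training configuration.
# The actual training pairs and starting checkpoint are not published in this
# card, so the examples and base model below are placeholders.
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed base checkpoint

train_examples = [
    InputExample(texts=[f"anchor sentence {i}", f"matching positive sentence {i}"])
    for i in range(16)  # placeholders; the original run had 186 batches of size 8 per epoch
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=186,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
    output_path="output/mpnet",
)
```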
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
pinzhenchen/sft-lora-de-ollama-3b | pinzhenchen | 2024-03-05T23:48:24Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "de", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:48:21Z |
---
language:
- de
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-ollama-3b | pinzhenchen | 2024-03-05T23:48:16Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "bg", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:48:12Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fr-bloom-7b1 | pinzhenchen | 2024-03-05T23:48:02Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "fr", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:47:58Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-bloom-7b1 | pinzhenchen | 2024-03-05T23:47:57Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "es", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:47:54Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-bloom-7b1 | pinzhenchen | 2024-03-05T23:47:49Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "bg", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:47:46Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-bloom-3b | pinzhenchen | 2024-03-05T23:47:44Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "zh", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:47:42Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-bloom-3b | pinzhenchen | 2024-03-05T23:47:40Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "ru", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:47:37Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fr-bloom-3b | pinzhenchen | 2024-03-05T23:47:36Z | 0 | 0 | null | ["generation", "question answering", "instruction tuning", "fr", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us"] | null | 2024-03-05T23:47:33Z |
---
language:
- fr
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with LoRA, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: French
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded together with the base model using the `transformers` and `peft` libraries.
Please refer to our GitHub repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-bloom-3b
|
pinzhenchen
| 2024-03-05T23:47:32Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:47:29Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-bloom-3b
|
pinzhenchen
| 2024-03-05T23:47:27Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:47:25Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) (used in its original English form; the non-English variants in this study are machine-translated from it). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-de-bloom-3b
|
pinzhenchen
| 2024-03-05T23:47:24Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:47:20Z |
---
language:
- de
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-cs-bloom-3b
|
pinzhenchen
| 2024-03-05T23:47:19Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"cs",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:47:16Z |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Czech
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-bloom-3b
|
pinzhenchen
| 2024-03-05T23:47:15Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:47:12Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-bloom-1b7
|
pinzhenchen
| 2024-03-05T23:47:05Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"ru",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:47:02Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
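
A hedged sketch of one way to do this with `transformers` and `peft` follows; the merge step is optional and shown only to illustrate standalone inference, and the prompt format is an assumption.
```python
# Sketch: load bloom-1b7 with the Russian LoRA adapter, then merge it for plain inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-1b7"
adapter_id = "pinzhenchen/sft-lora-ru-bloom-1b7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)
model = model.merge_and_unload()  # folds the LoRA weights into the base model (optional)

# Assumed Alpaca-style template; the Russian instruction means "Name the capital of Russia."
prompt = "### Instruction:\nНазовите столицу России.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```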
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fi-bloom-1b7
|
pinzhenchen
| 2024-03-05T23:46:57Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"fi",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:54Z |
---
language:
- fi
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
* Instruction tuning language: Finnish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-bloom-1b7
|
pinzhenchen
| 2024-03-05T23:46:49Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:46Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) (used in its original English form; the non-English variants in this study are machine-translated from it). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-de-bloom-1b7
|
pinzhenchen
| 2024-03-05T23:46:45Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:42Z |
---
language:
- de
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-cs-bloom-1b7
|
pinzhenchen
| 2024-03-05T23:46:41Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"cs",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:38Z |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
* Instruction tuning language: Czech
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-bloom-1b7
|
pinzhenchen
| 2024-03-05T23:46:37Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:35Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-zh-bloom-1b1
|
pinzhenchen
| 2024-03-05T23:46:34Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"zh",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:31Z |
---
language:
- zh
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: Chinese
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
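
For orientation only, a minimal loading sketch is shown below; the Alpaca-style prompt is an assumption, so consult the linked repository for the exact template.
```python
# Sketch: bloom-1b1 base model plus the Chinese LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-1b1"
adapter_id = "pinzhenchen/sft-lora-zh-bloom-1b1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

# Assumed Alpaca-style template; the instruction means "Introduce the Great Wall in one sentence."
prompt = "### Instruction:\n用一句话介绍长城。\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```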
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-bloom-1b1
|
pinzhenchen
| 2024-03-05T23:46:30Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"ru",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:27Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-fi-bloom-1b1
|
pinzhenchen
| 2024-03-05T23:46:22Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"fi",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:19Z |
---
language:
- fi
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: Finnish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-es-bloom-1b1
|
pinzhenchen
| 2024-03-05T23:46:18Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:15Z |
---
language:
- es
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: Spanish
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-bloom-1b1
|
pinzhenchen
| 2024-03-05T23:46:14Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:11Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) (used in its original English form; the non-English variants in this study are machine-translated from it). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-de-bloom-1b1
|
pinzhenchen
| 2024-03-05T23:46:10Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:46:08Z |
---
language:
- de
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-bloom-560m
|
pinzhenchen
| 2024-03-05T23:45:39Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:45:36Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) (used in its original English form; the non-English variants in this study are machine-translated from it). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
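
As a rough illustration (not the official snippet), the 560m adapter can be loaded and sampled from as follows; the prompt template and the sampling settings are assumptions.
```python
# Sketch: the smallest BLOOM checkpoint (560m) with its English LoRA adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-560m"
adapter_id = "pinzhenchen/sft-lora-en-bloom-560m"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

# Assumed Alpaca-style template.
prompt = "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```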
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-de-bloom-560m
|
pinzhenchen
| 2024-03-05T23:45:34Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:45:32Z |
---
language:
- de
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m)
* Instruction tuning language: German
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-cs-bloom-560m
|
pinzhenchen
| 2024-03-05T23:45:31Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"cs",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:45:28Z |
---
language:
- cs
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m)
* Instruction tuning language: Czech
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-bg-bloom-560m
|
pinzhenchen
| 2024-03-05T23:45:26Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:45:23Z |
---
language:
- bg
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m)
* Instruction tuning language: Bulgarian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-ru-baichuan-2-7b
|
pinzhenchen
| 2024-03-05T23:45:17Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"ru",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:45:14Z |
---
language:
- ru
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [baichuan-inc/Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base)
* Instruction tuning language: Russian
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
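
A hedged sketch for this base model is given below; the `trust_remote_code=True` flag, the fp16/`device_map` settings, and the Alpaca-style prompt are assumptions rather than the repository's official instructions.
```python
# Sketch: Baichuan2-7B base model plus the Russian LoRA adapter.
# Assumption: Baichuan2 ships custom modelling code, hence trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "baichuan-inc/Baichuan2-7B-Base"
adapter_id = "pinzhenchen/sft-lora-ru-baichuan-2-7b"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Assumed Alpaca-style template; the instruction means "Briefly explain what LoRA is."
prompt = "### Instruction:\nКратко объясните, что такое LoRA.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```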
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
pinzhenchen/sft-lora-en-baichuan-2-7b
|
pinzhenchen
| 2024-03-05T23:45:03Z | 0 | 0 | null |
[
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-05T23:44:59Z |
---
language:
- en
tags:
- generation
- question answering
- instruction tuning
license: cc-by-nc-4.0
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [baichuan-inc/Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base)
* Instruction tuning language: English
* Training method: LoRA.
* LoRA details: rank=8, alpha=16, target modules={key, query, value}.
* Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs.
* Dataset: [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) (used in its original English form; the non-English variants in this study are machine-translated from it). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
anilerkul/crossing-sentiment-team-based-splitting-model
|
anilerkul
| 2024-03-05T23:44:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T23:44:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JONGYUN/DPO_Test_2
|
JONGYUN
| 2024-03-05T23:44:19Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-21T04:03:49Z |
---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
---
# Llama-2-7b-hf DPO test model
### Model Details
- Developed by: JongYun CHOI
- Backbone Model: yanolja/KoSOLAR-10.7B-v0.2
- Library: [transformers](https://github.com/huggingface/transformers)
### Used Datasets
- private dataset
### Prompt Template
The Korean markers below mean "Question" (질문) and "Answer" (답변).
```
### 질문: {Instruction}
### 답변: {Answer}
```
|
MrezaPRZ/codellama-osquery
|
MrezaPRZ
| 2024-03-05T23:38:00Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T23:34:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anilerkul/crossing-check-match-based-model
|
anilerkul
| 2024-03-05T23:33:30Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T00:13:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neuralmagic/OpenHermes-2.5-Mistral-7B-pruned50
|
neuralmagic
| 2024-03-05T23:33:12Z | 50 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"nm-vllm",
"sparse",
"conversational",
"arxiv:2301.00774",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T08:32:58Z |
---
base_model: teknium/OpenHermes-2.5-Mistral-7B
inference: true
model_type: mistral
quantized_by: mgoin
tags:
- nm-vllm
- sparse
---
## OpenHermes-2.5-Mistral-7B-pruned50
This repo contains model files for [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) optimized for [nm-vllm](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs.
This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [nm-vllm](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory usage:
```bash
pip install nm-vllm[sparse]
```
Run in a Python pipeline for local inference:
```python
from vllm import LLM, SamplingParams
model = LLM("nm-testing/OpenHermes-2.5-Mistral-7B-pruned50", sparsity="sparse_w16a16")
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
sampling_params = SamplingParams(max_tokens=100)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
"""
Here is a simple recipe for making banana bread:
Ingredients:
- 3 ripe bananas
- 2 eggs
- 1/2 cup of sugar
- 1/2 cup of butter
- 2 cups of flour
- 1 teaspoon baking powder
- 2 teaspoons of baking soda
Instructions:
1. Preheat your oven at 350 degree Fahrenant.
"""
```
## Prompt template
```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
Install [SparseML](https://github.com/neuralmagic/sparseml):
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
```
Replace the recipe as you like and run this one-shot compression script to apply SparseGPT:
```python
import sparseml.transformers
original_model_name = "teknium/OpenHermes-2.5-Mistral-7B"
calibration_dataset = "open_platypus"
output_directory = "output/"
recipe = """
test_stage:
obcq_modifiers:
SparseGPTModifier:
sparsity: 0.5
sequential_update: true
mask_structure: 0:0
targets: ['re:model.layers.\d*$']
"""
# Apply SparseGPT to the model
sparseml.transformers.oneshot(
model=original_model_name,
dataset=calibration_dataset,
recipe=recipe,
output_dir=output_directory,
)
```
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
|
crossroderick/ppo-Pyramids
|
crossroderick
| 2024-03-05T23:32:58Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-03-05T23:32:53Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: crossroderick/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
NeuralNovel/Valor-7B-v0.1
|
NeuralNovel
| 2024-03-05T23:30:03Z | 0 | 10 |
peft
|
[
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"dataset:NeuralNovel/Neural-Story-v1",
"base_model:alnrg2arg/blockchainlabs_7B_merged_test2_4",
"base_model:adapter:alnrg2arg/blockchainlabs_7B_merged_test2_4",
"license:apache-2.0",
"region:us"
] | null | 2024-01-20T21:37:43Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- NeuralNovel/Neural-Story-v1
base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4
model-index:
- name: qlora-out
results: []
---

# NeuralNovel/Valor-7B-v0.1
Valor speaks louder than words.
This is a QLoRA finetune of blockchainlabs_7B_merged_test2_4 using the **Neural-Story-v1** dataset, with the intention of increasing creativity and writing ability.
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
<a href='https://discord.gg/KFS229xD' target='_blank'><img width='140' height='500' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>

# Training Details
```yaml
base_model: alnrg2arg/blockchainlabs_7B_merged_test2_4
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: NeuralNovel/Neural-Story-v1
type: completion
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
# qlora-out
This model is a fine-tuned version of [alnrg2arg/blockchainlabs_7B_merged_test2_4](https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4) on the Neural-Story-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1411
axolotl version: `0.3.0`
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
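For readers who want to reproduce that setup, the settings listed above correspond roughly to the following transformers `BitsAndBytesConfig` — a hedged sketch, not part of the original card:
```python
# Hedged sketch: the bitsandbytes settings above expressed as a transformers
# BitsAndBytesConfig, e.g. for reloading the base model in 4-bit before training.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Passed to AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
```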
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3251 | 0.06 | 1 | 2.8409 |
| 2.5318 | 0.25 | 4 | 2.7634 |
| 1.7316 | 0.51 | 8 | 2.3662 |
| 1.5196 | 0.76 | 12 | 2.1411 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Valor-7B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.21|
|AI2 Reasoning Challenge (25-Shot)|72.27|
|HellaSwag (10-Shot) |86.59|
|MMLU (5-Shot) |64.09|
|TruthfulQA (0-shot) |69.84|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |69.14|
|
cik009/gemma-2b-it-q4f16_0-MLC
|
cik009
| 2024-03-05T23:29:00Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-03-05T23:17:24Z |
---
license: other
license_name: gemma
license_link: https://ai.google.dev/gemma/terms
---
|
dranger003/OpenCodeInterpreter-DS-33B-iMat.GGUF
|
dranger003
| 2024-03-05T23:28:04Z | 13 | 2 |
gguf
|
[
"gguf",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T19:43:49Z |
---
license: other
license_name: deepseek-license
license_link: >-
https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct/blob/main/LICENSE
pipeline_tag: text-generation
library_name: gguf
---
<u>**NOTE**</u>: You will need a recent build of llama.cpp to run these quants (i.e. at least commit `494c870`).
GGUF importance matrix (imatrix) quants for https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-33B
* The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well.
> OpenCodeInterpreter is a family of open-source code generation systems designed to bridge the gap between large language models and advanced proprietary systems like the GPT-4 Code Interpreter. It significantly advances code generation capabilities by integrating execution and iterative refinement functionalities. This model is based on [deepseek-coder-33b-base](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base).
| Layers | Context | Template |
| --- | --- | --- |
| <pre>62</pre> | <pre>16384</pre> | <pre>\<|begin▁of▁sentence|\>[INST] \<\<SYS\>\><br>{instructions}<br>\<\</SYS\>\><br><br>{prompt} [/INST]</pre> |
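As a hedged local-inference sketch (not part of the original card), these quants can also be run through `llama-cpp-python` using the template above; the filename below is an assumption — substitute the GGUF file you actually downloaded from this repo:
```python
# Hedged sketch: running one of these GGUF quants with llama-cpp-python.
# The model filename is an assumption; use the quant you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="opencodeinterpreter-ds-33b-q4_k_m.gguf", n_ctx=16384)

instructions = "You are a helpful coding assistant."
prompt = "Write a Python function that reverses a singly linked list."
# The template's begin-of-sentence token is normally added automatically as BOS.
formatted = f"[INST] <<SYS>>\n{instructions}\n<</SYS>>\n\n{prompt} [/INST]"

out = llm(formatted, max_tokens=512, temperature=0.2)
print(out["choices"][0]["text"])
```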
|
NeuralNovel/Senzu-7B-v0.1
|
NeuralNovel
| 2024-03-05T23:27:58Z | 28 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"dataset:practical-dreamer/RPGPT_PublicDomain-alpaca",
"dataset:shuyuej/metamath_gsm8k",
"dataset:NeuralNovel/Neural-DPO",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T08:15:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- practical-dreamer/RPGPT_PublicDomain-alpaca
- shuyuej/metamath_gsm8k
- NeuralNovel/Neural-DPO
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# NeuralNovel/Senzu-7B-v0.1
Embracing a quiet *storm* ..
## Model Details
This model is a full-parameter fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1),
trained on the Neural-DPO, metamath_gsm8k and RPGPT_PublicDomain-alpaca datasets.
It excels at character roleplay and can also respond accurately to a wide variety of complex questions.
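A hedged inference sketch with plain transformers, using the `[INST]` prompt format from the training config below (not part of the original card; the prompt and generation settings are illustrative assumptions):
```python
# Hedged sketch (not from the original card): transformers inference with the
# [INST] format used in the training config. Generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeuralNovel/Senzu-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "[INST] Stay in character as a weary sea captain and describe the storm ahead. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```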
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
<a href='https://discord.gg/KFS229xD' target='_blank'><img width='140' height='500' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: practical-dreamer/RPGPT_PublicDomain-alpaca
type: alpaca
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
datasets:
- path: shuyuej/metamath_gsm8k
type: jeopardy
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
datasets:
- path: NeuralNovel/Neural-DPO
type:
system_prompt: ""
field_system: system
field_instruction: chosen
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 0
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2061 | 0.01 | 1 | 0.3139 |
| 0.0 | 0.25 | 32 | 0.0000 |
| 0.0 | 0.5 | 64 | 0.0010 |
| 0.0 | 0.76 | 96 | 0.0000 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Senzu-7B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |56.40|
|AI2 Reasoning Challenge (25-Shot)|58.19|
|HellaSwag (10-Shot) |81.98|
|MMLU (5-Shot) |63.20|
|TruthfulQA (0-shot) |40.20|
|Winogrande (5-shot) |76.64|
|GSM8k (5-shot) |18.20|
|
NeuralNovel/Senzu-7B-v0.1-DPO
|
NeuralNovel
| 2024-03-05T23:26:46Z | 11 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"dataset:practical-dreamer/RPGPT_PublicDomain-alpaca",
"dataset:shuyuej/metamath_gsm8k",
"dataset:NeuralNovel/Neural-DPO",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T20:54:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- practical-dreamer/RPGPT_PublicDomain-alpaca
- shuyuej/metamath_gsm8k
- NeuralNovel/Neural-DPO
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# NeuralNovel/Senzu-7B-v0.1-DPO
Embracing a quiet *storm* ..
## Model Details
This model is Senzu-7B-v0.1, a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1),
DPO-trained on the Neural-DPO dataset.
It excels at character-based roleplay.
<a href='https://ko-fi.com/S6S2UH2TC' target='_blank'><img height='38' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
<a href='https://discord.gg/KFS229xD' target='_blank'><img width='140' height='500' style='border:0px;height:36px;' src='https://i.ibb.co/tqwznYM/Discord-button.png' border='0' alt='Join Our Discord!' /></a>
## Training Parameters
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: practical-dreamer/RPGPT_PublicDomain-alpaca
type: alpaca
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
datasets:
- path: shuyuej/metamath_gsm8k
type: jeopardy
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
datasets:
- path: NeuralNovel/Neural-DPO
type:
system_prompt: ""
field_system: system
field_instruction: chosen
field_output: chosen
format: "[INST] {instruction} [/INST]"
no_input_format: "[INST] {instruction} [/INST]"
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./out
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 0
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2061 | 0.01 | 1 | 0.3139 |
| 0.0 | 0.25 | 32 | 0.0000 |
| 0.0 | 0.5 | 64 | 0.0010 |
| 0.0 | 0.76 | 96 | 0.0000 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Senzu-7B-v0.1-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.90|
|AI2 Reasoning Challenge (25-Shot)|66.72|
|HellaSwag (10-Shot) |84.34|
|MMLU (5-Shot) |62.12|
|TruthfulQA (0-shot) |45.29|
|Winogrande (5-shot) |79.95|
|GSM8k (5-shot) |32.98|
|
jamiehudson/725_model_v4
|
jamiehudson
| 2024-03-05T23:24:43Z | 5 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] |
text-classification
| 2024-03-05T23:24:32Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: man, product/whatever is my new best friend. i like product but the integration
of product into office and product is a lot of fun. i just spent the day feeding
it my training presentation i'm preparing in my day job and it was very helpful.
almost better than humans.
- text: that's great news! product is the perfect platform to share these advanced
product prompts and help more users get the most out of it!
- text: after only one week's trial of the new product with brand enabled, i have
replaced my default browser product that i was using for more than 7 years with
new product. i no longer need to spend a lot of time finding answers from a bunch
of search results and web pages. it's amazing
- text: very impressive. brand is finally fighting back. i am just a little worried
about the scalability of such a high context window size, since even in their
demos it took quite a while to process everything. regardless, i am very interested
in seeing what types of capabilities a >1m token size window can unleash.
- text: product the way it shows the sources is so fucking cool, this new ai is amazing
pipeline_tag: text-classification
inference: true
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.964
name: Accuracy
- type: f1
value:
- 0.9130434782608695
- 0.888888888888889
- 0.9779951100244498
name: F1
- type: precision
value:
- 0.9545454545454546
- 1.0
- 0.9615384615384616
name: Precision
- type: recall
value:
- 0.875
- 0.8
- 0.9950248756218906
name: Recall
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
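As a hedged illustration of that two-stage procedure (not the exact setup used for this model — the toy dataset, label mapping and hyperparameters below are assumptions), training a SetFit classifier looks roughly like this:
```python
# Hedged sketch: few-shot SetFit training in the same style as this model.
# The toy dataset, label mapping and hyperparameters are illustrative assumptions.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Integer labels: 0 = pit, 1 = peak, 2 = neither
train_ds = Dataset.from_dict({
    "text": [
        "this tool keeps crashing on me", "support never replied, very frustrating",
        "it wrote my resume and it looks great", "honestly blown away by the new release",
        "saw it mentioned on the news today", "my colleague uses it sometimes",
    ],
    "label": [0, 0, 1, 1, 2, 2],
})

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5", labels=["pit", "peak", "neither"])

args = TrainingArguments(batch_size=32, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fit the LogisticRegression head

print(model.predict(["this new ai is amazing"]))
```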
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| neither | <ul><li>'i asked brand to write it and then let it translate back. so in reality i have no clue what i am sending...'</li><li>"i saw someone summarize brand the other day; it doesn't give answers, it gives answer-shaped responses."</li><li>'thank you comrade i mean colleague. i will have brand summarize.'</li></ul> |
| peak | <ul><li>'brand!! it helped me finish my resume. i just asked it if it could write my resume based on horribly written descriptions i came up with. and it made it all pretty:)'</li><li>'been building products for a bit now and your product (audio pen) is simple, useful and just works (like the early magic when product came out). congratulations and keep the flag flying high. not surprised that india is producing apps like yours. high time:-)'</li><li>'just got access to personalization in brand!! totally unexpected. very happy'</li></ul> |
| pit | <ul><li>'brand recently i came across a very unwell patient in a psychiatric unit who was using product & this was reinforcing his delusional state & detrimentally impacting his mental health. anyone looking into this type of usage of product? what safe guards are being put in place?'</li><li>'brand product is def better at extracting numbers from images, product failed (pro version) twice...'</li><li>"the stuff brand gives is entirely too scripted *and* impractical, which is what i'm trying to avoid:/"</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy | F1 | Precision | Recall |
|:--------|:---------|:------------------------------------------------------------|:----------------------------------------------|:---------------------------------|
| **all** | 0.964 | [0.9130434782608695, 0.888888888888889, 0.9779951100244498] | [0.9545454545454546, 1.0, 0.9615384615384616] | [0.875, 0.8, 0.9950248756218906] |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("jamiehudson/725_model_v4")
# Run inference
preds = model("product the way it shows the sources is so fucking cool, this new ai is amazing")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 31.6606 | 98 |
| Label | Training Sample Count |
|:--------|:----------------------|
| pit | 277 |
| peak | 265 |
| neither | 1105 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2683 | - |
| 0.0012 | 50 | 0.2643 | - |
| 0.0023 | 100 | 0.2432 | - |
| 0.0035 | 150 | 0.2623 | - |
| 0.0047 | 200 | 0.2527 | - |
| 0.0058 | 250 | 0.2252 | - |
| 0.0070 | 300 | 0.2362 | - |
| 0.0082 | 350 | 0.2334 | - |
| 0.0093 | 400 | 0.2189 | - |
| 0.0105 | 450 | 0.2144 | - |
| 0.0117 | 500 | 0.1971 | - |
| 0.0129 | 550 | 0.1565 | - |
| 0.0140 | 600 | 0.0816 | - |
| 0.0152 | 650 | 0.1417 | - |
| 0.0164 | 700 | 0.1051 | - |
| 0.0175 | 750 | 0.0686 | - |
| 0.0187 | 800 | 0.0394 | - |
| 0.0199 | 850 | 0.0947 | - |
| 0.0210 | 900 | 0.0468 | - |
| 0.0222 | 950 | 0.0143 | - |
| 0.0234 | 1000 | 0.0281 | - |
| 0.0245 | 1050 | 0.0329 | - |
| 0.0257 | 1100 | 0.0206 | - |
| 0.0269 | 1150 | 0.0113 | - |
| 0.0280 | 1200 | 0.0054 | - |
| 0.0292 | 1250 | 0.0056 | - |
| 0.0304 | 1300 | 0.0209 | - |
| 0.0315 | 1350 | 0.0064 | - |
| 0.0327 | 1400 | 0.0085 | - |
| 0.0339 | 1450 | 0.0025 | - |
| 0.0350 | 1500 | 0.0031 | - |
| 0.0362 | 1550 | 0.0024 | - |
| 0.0374 | 1600 | 0.0014 | - |
| 0.0386 | 1650 | 0.0019 | - |
| 0.0397 | 1700 | 0.0023 | - |
| 0.0409 | 1750 | 0.0014 | - |
| 0.0421 | 1800 | 0.002 | - |
| 0.0432 | 1850 | 0.001 | - |
| 0.0444 | 1900 | 0.001 | - |
| 0.0456 | 1950 | 0.0019 | - |
| 0.0467 | 2000 | 0.0017 | - |
| 0.0479 | 2050 | 0.001 | - |
| 0.0491 | 2100 | 0.0008 | - |
| 0.0502 | 2150 | 0.0011 | - |
| 0.0514 | 2200 | 0.0006 | - |
| 0.0526 | 2250 | 0.0012 | - |
| 0.0537 | 2300 | 0.0008 | - |
| 0.0549 | 2350 | 0.0014 | - |
| 0.0561 | 2400 | 0.0009 | - |
| 0.0572 | 2450 | 0.0009 | - |
| 0.0584 | 2500 | 0.001 | - |
| 0.0596 | 2550 | 0.0007 | - |
| 0.0607 | 2600 | 0.0007 | - |
| 0.0619 | 2650 | 0.0006 | - |
| 0.0631 | 2700 | 0.0004 | - |
| 0.0643 | 2750 | 0.0007 | - |
| 0.0654 | 2800 | 0.0005 | - |
| 0.0666 | 2850 | 0.0007 | - |
| 0.0678 | 2900 | 0.0007 | - |
| 0.0689 | 2950 | 0.0006 | - |
| 0.0701 | 3000 | 0.0005 | - |
| 0.0713 | 3050 | 0.0007 | - |
| 0.0724 | 3100 | 0.0008 | - |
| 0.0736 | 3150 | 0.0005 | - |
| 0.0748 | 3200 | 0.0005 | - |
| 0.0759 | 3250 | 0.0005 | - |
| 0.0771 | 3300 | 0.0006 | - |
| 0.0783 | 3350 | 0.0006 | - |
| 0.0794 | 3400 | 0.0006 | - |
| 0.0806 | 3450 | 0.0004 | - |
| 0.0818 | 3500 | 0.0005 | - |
| 0.0829 | 3550 | 0.0005 | - |
| 0.0841 | 3600 | 0.0005 | - |
| 0.0853 | 3650 | 0.0005 | - |
| 0.0864 | 3700 | 0.0006 | - |
| 0.0876 | 3750 | 0.0039 | - |
| 0.0888 | 3800 | 0.0004 | - |
| 0.0900 | 3850 | 0.0003 | - |
| 0.0911 | 3900 | 0.0004 | - |
| 0.0923 | 3950 | 0.0007 | - |
| 0.0935 | 4000 | 0.0003 | - |
| 0.0946 | 4050 | 0.0004 | - |
| 0.0958 | 4100 | 0.0003 | - |
| 0.0970 | 4150 | 0.0003 | - |
| 0.0981 | 4200 | 0.0004 | - |
| 0.0993 | 4250 | 0.0003 | - |
| 0.1005 | 4300 | 0.0004 | - |
| 0.1016 | 4350 | 0.0003 | - |
| 0.1028 | 4400 | 0.0004 | - |
| 0.1040 | 4450 | 0.0003 | - |
| 0.1051 | 4500 | 0.0004 | - |
| 0.1063 | 4550 | 0.0003 | - |
| 0.1075 | 4600 | 0.0003 | - |
| 0.1086 | 4650 | 0.0003 | - |
| 0.1098 | 4700 | 0.0003 | - |
| 0.1110 | 4750 | 0.0016 | - |
| 0.1121 | 4800 | 0.0003 | - |
| 0.1133 | 4850 | 0.0002 | - |
| 0.1145 | 4900 | 0.0003 | - |
| 0.1157 | 4950 | 0.0002 | - |
| 0.1168 | 5000 | 0.0003 | - |
| 0.1180 | 5050 | 0.0003 | - |
| 0.1192 | 5100 | 0.0003 | - |
| 0.1203 | 5150 | 0.0002 | - |
| 0.1215 | 5200 | 0.0003 | - |
| 0.1227 | 5250 | 0.0002 | - |
| 0.1238 | 5300 | 0.0178 | - |
| 0.1250 | 5350 | 0.0014 | - |
| 0.1262 | 5400 | 0.002 | - |
| 0.1273 | 5450 | 0.0002 | - |
| 0.1285 | 5500 | 0.0008 | - |
| 0.1297 | 5550 | 0.0003 | - |
| 0.1308 | 5600 | 0.0002 | - |
| 0.1320 | 5650 | 0.0002 | - |
| 0.1332 | 5700 | 0.0002 | - |
| 0.1343 | 5750 | 0.0003 | - |
| 0.1355 | 5800 | 0.0002 | - |
| 0.1367 | 5850 | 0.0003 | - |
| 0.1378 | 5900 | 0.0003 | - |
| 0.1390 | 5950 | 0.0002 | - |
| 0.1402 | 6000 | 0.0002 | - |
| 0.1414 | 6050 | 0.0002 | - |
| 0.1425 | 6100 | 0.0002 | - |
| 0.1437 | 6150 | 0.0002 | - |
| 0.1449 | 6200 | 0.0002 | - |
| 0.1460 | 6250 | 0.0019 | - |
| 0.1472 | 6300 | 0.0005 | - |
| 0.1484 | 6350 | 0.0002 | - |
| 0.1495 | 6400 | 0.0005 | - |
| 0.1507 | 6450 | 0.0003 | - |
| 0.1519 | 6500 | 0.0208 | - |
| 0.1530 | 6550 | 0.0003 | - |
| 0.1542 | 6600 | 0.0002 | - |
| 0.1554 | 6650 | 0.0002 | - |
| 0.1565 | 6700 | 0.0002 | - |
| 0.1577 | 6750 | 0.0002 | - |
| 0.1589 | 6800 | 0.0002 | - |
| 0.1600 | 6850 | 0.0002 | - |
| 0.1612 | 6900 | 0.0104 | - |
| 0.1624 | 6950 | 0.0001 | - |
| 0.1635 | 7000 | 0.0002 | - |
| 0.1647 | 7050 | 0.0002 | - |
| 0.1659 | 7100 | 0.0002 | - |
| 0.1671 | 7150 | 0.0001 | - |
| 0.1682 | 7200 | 0.0002 | - |
| 0.1694 | 7250 | 0.0002 | - |
| 0.1706 | 7300 | 0.0003 | - |
| 0.1717 | 7350 | 0.0002 | - |
| 0.1729 | 7400 | 0.0001 | - |
| 0.1741 | 7450 | 0.0001 | - |
| 0.1752 | 7500 | 0.0002 | - |
| 0.1764 | 7550 | 0.0004 | - |
| 0.1776 | 7600 | 0.0002 | - |
| 0.1787 | 7650 | 0.0005 | - |
| 0.1799 | 7700 | 0.0001 | - |
| 0.1811 | 7750 | 0.0002 | - |
| 0.1822 | 7800 | 0.0002 | - |
| 0.1834 | 7850 | 0.0001 | - |
| 0.1846 | 7900 | 0.0002 | - |
| 0.1857 | 7950 | 0.0002 | - |
| 0.1869 | 8000 | 0.0002 | - |
| 0.1881 | 8050 | 0.0001 | - |
| 0.1892 | 8100 | 0.0002 | - |
| 0.1904 | 8150 | 0.0001 | - |
| 0.1916 | 8200 | 0.0001 | - |
| 0.1928 | 8250 | 0.0001 | - |
| 0.1939 | 8300 | 0.0001 | - |
| 0.1951 | 8350 | 0.0001 | - |
| 0.1963 | 8400 | 0.0002 | - |
| 0.1974 | 8450 | 0.0002 | - |
| 0.1986 | 8500 | 0.0002 | - |
| 0.1998 | 8550 | 0.0002 | - |
| 0.2009 | 8600 | 0.0001 | - |
| 0.2021 | 8650 | 0.0001 | - |
| 0.2033 | 8700 | 0.0001 | - |
| 0.2044 | 8750 | 0.0001 | - |
| 0.2056 | 8800 | 0.0001 | - |
| 0.2068 | 8850 | 0.0001 | - |
| 0.2079 | 8900 | 0.0001 | - |
| 0.2091 | 8950 | 0.0001 | - |
| 0.2103 | 9000 | 0.0001 | - |
| 0.2114 | 9050 | 0.0001 | - |
| 0.2126 | 9100 | 0.0001 | - |
| 0.2138 | 9150 | 0.0001 | - |
| 0.2149 | 9200 | 0.0001 | - |
| 0.2161 | 9250 | 0.0002 | - |
| 0.2173 | 9300 | 0.0001 | - |
| 0.2185 | 9350 | 0.0002 | - |
| 0.2196 | 9400 | 0.0001 | - |
| 0.2208 | 9450 | 0.0001 | - |
| 0.2220 | 9500 | 0.0001 | - |
| 0.2231 | 9550 | 0.0001 | - |
| 0.2243 | 9600 | 0.0001 | - |
| 0.2255 | 9650 | 0.0002 | - |
| 0.2266 | 9700 | 0.0002 | - |
| 0.2278 | 9750 | 0.0001 | - |
| 0.2290 | 9800 | 0.0001 | - |
| 0.2301 | 9850 | 0.0002 | - |
| 0.2313 | 9900 | 0.0001 | - |
| 0.2325 | 9950 | 0.0001 | - |
| 0.2336 | 10000 | 0.0001 | - |
| 0.2348 | 10050 | 0.0001 | - |
| 0.2360 | 10100 | 0.0001 | - |
| 0.2371 | 10150 | 0.0001 | - |
| 0.2383 | 10200 | 0.0001 | - |
| 0.2395 | 10250 | 0.0001 | - |
| 0.2406 | 10300 | 0.0001 | - |
| 0.2418 | 10350 | 0.0001 | - |
| 0.2430 | 10400 | 0.0001 | - |
| 0.2442 | 10450 | 0.0001 | - |
| 0.2453 | 10500 | 0.0001 | - |
| 0.2465 | 10550 | 0.0001 | - |
| 0.2477 | 10600 | 0.0001 | - |
| 0.2488 | 10650 | 0.0001 | - |
| 0.2500 | 10700 | 0.0001 | - |
| 0.2512 | 10750 | 0.0001 | - |
| 0.2523 | 10800 | 0.0001 | - |
| 0.2535 | 10850 | 0.0001 | - |
| 0.2547 | 10900 | 0.0001 | - |
| 0.2558 | 10950 | 0.0001 | - |
| 0.2570 | 11000 | 0.0002 | - |
| 0.2582 | 11050 | 0.0001 | - |
| 0.2593 | 11100 | 0.0003 | - |
| 0.2605 | 11150 | 0.0001 | - |
| 0.2617 | 11200 | 0.0001 | - |
| 0.2628 | 11250 | 0.0001 | - |
| 0.2640 | 11300 | 0.0001 | - |
| 0.2652 | 11350 | 0.0001 | - |
| 0.2663 | 11400 | 0.0001 | - |
| 0.2675 | 11450 | 0.0001 | - |
| 0.2687 | 11500 | 0.0001 | - |
| 0.2699 | 11550 | 0.0001 | - |
| 0.2710 | 11600 | 0.0001 | - |
| 0.2722 | 11650 | 0.0001 | - |
| 0.2734 | 11700 | 0.0001 | - |
| 0.2745 | 11750 | 0.0001 | - |
| 0.2757 | 11800 | 0.0001 | - |
| 0.2769 | 11850 | 0.0001 | - |
| 0.2780 | 11900 | 0.0001 | - |
| 0.2792 | 11950 | 0.0001 | - |
| 0.2804 | 12000 | 0.0001 | - |
| 0.2815 | 12050 | 0.0001 | - |
| 0.2827 | 12100 | 0.0137 | - |
| 0.2839 | 12150 | 0.0001 | - |
| 0.2850 | 12200 | 0.0001 | - |
| 0.2862 | 12250 | 0.0001 | - |
| 0.2874 | 12300 | 0.0001 | - |
| 0.2885 | 12350 | 0.0001 | - |
| 0.2897 | 12400 | 0.0001 | - |
| 0.2909 | 12450 | 0.0001 | - |
| 0.2920 | 12500 | 0.0001 | - |
| 0.2932 | 12550 | 0.0001 | - |
| 0.2944 | 12600 | 0.0001 | - |
| 0.2956 | 12650 | 0.0001 | - |
| 0.2967 | 12700 | 0.0 | - |
| 0.2979 | 12750 | 0.0001 | - |
| 0.2991 | 12800 | 0.0001 | - |
| 0.3002 | 12850 | 0.0001 | - |
| 0.3014 | 12900 | 0.0001 | - |
| 0.3026 | 12950 | 0.0001 | - |
| 0.3037 | 13000 | 0.0001 | - |
| 0.3049 | 13050 | 0.0001 | - |
| 0.3061 | 13100 | 0.0001 | - |
| 0.3072 | 13150 | 0.0001 | - |
| 0.3084 | 13200 | 0.0001 | - |
| 0.3096 | 13250 | 0.0001 | - |
| 0.3107 | 13300 | 0.0001 | - |
| 0.3119 | 13350 | 0.0001 | - |
| 0.3131 | 13400 | 0.0001 | - |
| 0.3142 | 13450 | 0.0001 | - |
| 0.3154 | 13500 | 0.0001 | - |
| 0.3166 | 13550 | 0.0001 | - |
| 0.3177 | 13600 | 0.0001 | - |
| 0.3189 | 13650 | 0.0001 | - |
| 0.3201 | 13700 | 0.0001 | - |
| 0.3213 | 13750 | 0.0001 | - |
| 0.3224 | 13800 | 0.0001 | - |
| 0.3236 | 13850 | 0.0 | - |
| 0.3248 | 13900 | 0.0001 | - |
| 0.3259 | 13950 | 0.0001 | - |
| 0.3271 | 14000 | 0.0001 | - |
| 0.3283 | 14050 | 0.0002 | - |
| 0.3294 | 14100 | 0.0001 | - |
| 0.3306 | 14150 | 0.0001 | - |
| 0.3318 | 14200 | 0.0001 | - |
| 0.3329 | 14250 | 0.0001 | - |
| 0.3341 | 14300 | 0.0001 | - |
| 0.3353 | 14350 | 0.0001 | - |
| 0.3364 | 14400 | 0.0001 | - |
| 0.3376 | 14450 | 0.0001 | - |
| 0.3388 | 14500 | 0.0001 | - |
| 0.3399 | 14550 | 0.0001 | - |
| 0.3411 | 14600 | 0.0001 | - |
| 0.3423 | 14650 | 0.0001 | - |
| 0.3434 | 14700 | 0.0001 | - |
| 0.3446 | 14750 | 0.0001 | - |
| 0.3458 | 14800 | 0.0001 | - |
| 0.3470 | 14850 | 0.0001 | - |
| 0.3481 | 14900 | 0.0001 | - |
| 0.3493 | 14950 | 0.0 | - |
| 0.3505 | 15000 | 0.0001 | - |
| 0.3516 | 15050 | 0.0001 | - |
| 0.3528 | 15100 | 0.0 | - |
| 0.3540 | 15150 | 0.0001 | - |
| 0.3551 | 15200 | 0.0001 | - |
| 0.3563 | 15250 | 0.0001 | - |
| 0.3575 | 15300 | 0.0001 | - |
| 0.3586 | 15350 | 0.0001 | - |
| 0.3598 | 15400 | 0.0001 | - |
| 0.3610 | 15450 | 0.0001 | - |
| 0.3621 | 15500 | 0.0001 | - |
| 0.3633 | 15550 | 0.0001 | - |
| 0.3645 | 15600 | 0.0002 | - |
| 0.3656 | 15650 | 0.0001 | - |
| 0.3668 | 15700 | 0.0001 | - |
| 0.3680 | 15750 | 0.0001 | - |
| 0.3692 | 15800 | 0.0001 | - |
| 0.3703 | 15850 | 0.0001 | - |
| 0.3715 | 15900 | 0.0001 | - |
| 0.3727 | 15950 | 0.0 | - |
| 0.3738 | 16000 | 0.0 | - |
| 0.3750 | 16050 | 0.0 | - |
| 0.3762 | 16100 | 0.0 | - |
| 0.3773 | 16150 | 0.0001 | - |
| 0.3785 | 16200 | 0.0001 | - |
| 0.3797 | 16250 | 0.0001 | - |
| 0.3808 | 16300 | 0.0001 | - |
| 0.3820 | 16350 | 0.0001 | - |
| 0.3832 | 16400 | 0.0001 | - |
| 0.3843 | 16450 | 0.0 | - |
| 0.3855 | 16500 | 0.0001 | - |
| 0.3867 | 16550 | 0.0 | - |
| 0.3878 | 16600 | 0.0001 | - |
| 0.3890 | 16650 | 0.0001 | - |
| 0.3902 | 16700 | 0.0001 | - |
| 0.3913 | 16750 | 0.0001 | - |
| 0.3925 | 16800 | 0.0002 | - |
| 0.3937 | 16850 | 0.0002 | - |
| 0.3949 | 16900 | 0.0 | - |
| 0.3960 | 16950 | 0.0 | - |
| 0.3972 | 17000 | 0.0 | - |
| 0.3984 | 17050 | 0.0001 | - |
| 0.3995 | 17100 | 0.0001 | - |
| 0.4007 | 17150 | 0.0001 | - |
| 0.4019 | 17200 | 0.0001 | - |
| 0.4030 | 17250 | 0.0 | - |
| 0.4042 | 17300 | 0.0 | - |
| 0.4054 | 17350 | 0.0279 | - |
| 0.4065 | 17400 | 0.0 | - |
| 0.4077 | 17450 | 0.0 | - |
| 0.4089 | 17500 | 0.0 | - |
| 0.4100 | 17550 | 0.0 | - |
| 0.4112 | 17600 | 0.0001 | - |
| 0.4124 | 17650 | 0.0 | - |
| 0.4135 | 17700 | 0.028 | - |
| 0.4147 | 17750 | 0.0 | - |
| 0.4159 | 17800 | 0.0 | - |
| 0.4170 | 17850 | 0.0 | - |
| 0.4182 | 17900 | 0.0 | - |
| 0.4194 | 17950 | 0.0001 | - |
| 0.4206 | 18000 | 0.0 | - |
| 0.4217 | 18050 | 0.0 | - |
| 0.4229 | 18100 | 0.0001 | - |
| 0.4241 | 18150 | 0.0 | - |
| 0.4252 | 18200 | 0.0 | - |
| 0.4264 | 18250 | 0.0 | - |
| 0.4276 | 18300 | 0.0 | - |
| 0.4287 | 18350 | 0.0 | - |
| 0.4299 | 18400 | 0.0 | - |
| 0.4311 | 18450 | 0.0001 | - |
| 0.4322 | 18500 | 0.0001 | - |
| 0.4334 | 18550 | 0.0001 | - |
| 0.4346 | 18600 | 0.0001 | - |
| 0.4357 | 18650 | 0.0 | - |
| 0.4369 | 18700 | 0.0 | - |
| 0.4381 | 18750 | 0.0001 | - |
| 0.4392 | 18800 | 0.0001 | - |
| 0.4404 | 18850 | 0.0 | - |
| 0.4416 | 18900 | 0.0001 | - |
| 0.4427 | 18950 | 0.0001 | - |
| 0.4439 | 19000 | 0.0 | - |
| 0.4451 | 19050 | 0.0 | - |
| 0.4463 | 19100 | 0.0001 | - |
| 0.4474 | 19150 | 0.0 | - |
| 0.4486 | 19200 | 0.0001 | - |
| 0.4498 | 19250 | 0.0 | - |
| 0.4509 | 19300 | 0.0001 | - |
| 0.4521 | 19350 | 0.0001 | - |
| 0.4533 | 19400 | 0.0001 | - |
| 0.4544 | 19450 | 0.0 | - |
| 0.4556 | 19500 | 0.0001 | - |
| 0.4568 | 19550 | 0.0001 | - |
| 0.4579 | 19600 | 0.0001 | - |
| 0.4591 | 19650 | 0.0001 | - |
| 0.4603 | 19700 | 0.0001 | - |
| 0.4614 | 19750 | 0.0001 | - |
| 0.4626 | 19800 | 0.0 | - |
| 0.4638 | 19850 | 0.0 | - |
| 0.4649 | 19900 | 0.0001 | - |
| 0.4661 | 19950 | 0.0 | - |
| 0.4673 | 20000 | 0.0 | - |
| 0.4684 | 20050 | 0.0 | - |
| 0.4696 | 20100 | 0.0 | - |
| 0.4708 | 20150 | 0.0 | - |
| 0.4720 | 20200 | 0.0 | - |
| 0.4731 | 20250 | 0.0 | - |
| 0.4743 | 20300 | 0.0001 | - |
| 0.4755 | 20350 | 0.0001 | - |
| 0.4766 | 20400 | 0.0001 | - |
| 0.4778 | 20450 | 0.0 | - |
| 0.4790 | 20500 | 0.0 | - |
| 0.4801 | 20550 | 0.0001 | - |
| 0.4813 | 20600 | 0.0 | - |
| 0.4825 | 20650 | 0.0005 | - |
| 0.4836 | 20700 | 0.0001 | - |
| 0.4848 | 20750 | 0.0001 | - |
| 0.4860 | 20800 | 0.0 | - |
| 0.4871 | 20850 | 0.0001 | - |
| 0.4883 | 20900 | 0.0001 | - |
| 0.4895 | 20950 | 0.0 | - |
| 0.4906 | 21000 | 0.0 | - |
| 0.4918 | 21050 | 0.0 | - |
| 0.4930 | 21100 | 0.0 | - |
| 0.4941 | 21150 | 0.0001 | - |
| 0.4953 | 21200 | 0.0 | - |
| 0.4965 | 21250 | 0.0001 | - |
| 0.4977 | 21300 | 0.0 | - |
| 0.4988 | 21350 | 0.0001 | - |
| 0.5000 | 21400 | 0.0001 | - |
| 0.5012 | 21450 | 0.0 | - |
| 0.5023 | 21500 | 0.0 | - |
| 0.5035 | 21550 | 0.0 | - |
| 0.5047 | 21600 | 0.0001 | - |
| 0.5058 | 21650 | 0.0 | - |
| 0.5070 | 21700 | 0.0 | - |
| 0.5082 | 21750 | 0.0 | - |
| 0.5093 | 21800 | 0.0 | - |
| 0.5105 | 21850 | 0.0 | - |
| 0.5117 | 21900 | 0.0001 | - |
| 0.5128 | 21950 | 0.0 | - |
| 0.5140 | 22000 | 0.0 | - |
| 0.5152 | 22050 | 0.0 | - |
| 0.5163 | 22100 | 0.0 | - |
| 0.5175 | 22150 | 0.0 | - |
| 0.5187 | 22200 | 0.0001 | - |
| 0.5198 | 22250 | 0.0 | - |
| 0.5210 | 22300 | 0.0001 | - |
| 0.5222 | 22350 | 0.0 | - |
| 0.5234 | 22400 | 0.0001 | - |
| 0.5245 | 22450 | 0.0001 | - |
| 0.5257 | 22500 | 0.0 | - |
| 0.5269 | 22550 | 0.0 | - |
| 0.5280 | 22600 | 0.0 | - |
| 0.5292 | 22650 | 0.0 | - |
| 0.5304 | 22700 | 0.0 | - |
| 0.5315 | 22750 | 0.0 | - |
| 0.5327 | 22800 | 0.0 | - |
| 0.5339 | 22850 | 0.0 | - |
| 0.5350 | 22900 | 0.0001 | - |
| 0.5362 | 22950 | 0.0 | - |
| 0.5374 | 23000 | 0.0 | - |
| 0.5385 | 23050 | 0.0001 | - |
| 0.5397 | 23100 | 0.0 | - |
| 0.5409 | 23150 | 0.0 | - |
| 0.5420 | 23200 | 0.0001 | - |
| 0.5432 | 23250 | 0.0 | - |
| 0.5444 | 23300 | 0.0001 | - |
| 0.5455 | 23350 | 0.0001 | - |
| 0.5467 | 23400 | 0.0 | - |
| 0.5479 | 23450 | 0.0 | - |
| 0.5491 | 23500 | 0.0001 | - |
| 0.5502 | 23550 | 0.0 | - |
| 0.5514 | 23600 | 0.0001 | - |
| 0.5526 | 23650 | 0.0 | - |
| 0.5537 | 23700 | 0.0 | - |
| 0.5549 | 23750 | 0.0001 | - |
| 0.5561 | 23800 | 0.0 | - |
| 0.5572 | 23850 | 0.0 | - |
| 0.5584 | 23900 | 0.0 | - |
| 0.5596 | 23950 | 0.0 | - |
| 0.5607 | 24000 | 0.0 | - |
| 0.5619 | 24050 | 0.0 | - |
| 0.5631 | 24100 | 0.0001 | - |
| 0.5642 | 24150 | 0.0001 | - |
| 0.5654 | 24200 | 0.0 | - |
| 0.5666 | 24250 | 0.0 | - |
| 0.5677 | 24300 | 0.0001 | - |
| 0.5689 | 24350 | 0.0 | - |
| 0.5701 | 24400 | 0.0001 | - |
| 0.5712 | 24450 | 0.0 | - |
| 0.5724 | 24500 | 0.0 | - |
| 0.5736 | 24550 | 0.0 | - |
| 0.5748 | 24600 | 0.0029 | - |
| 0.5759 | 24650 | 0.0 | - |
| 0.5771 | 24700 | 0.0 | - |
| 0.5783 | 24750 | 0.0 | - |
| 0.5794 | 24800 | 0.0 | - |
| 0.5806 | 24850 | 0.0 | - |
| 0.5818 | 24900 | 0.0 | - |
| 0.5829 | 24950 | 0.0001 | - |
| 0.5841 | 25000 | 0.0 | - |
| 0.5853 | 25050 | 0.0 | - |
| 0.5864 | 25100 | 0.0001 | - |
| 0.5876 | 25150 | 0.0 | - |
| 0.5888 | 25200 | 0.0 | - |
| 0.5899 | 25250 | 0.0 | - |
| 0.5911 | 25300 | 0.0001 | - |
| 0.5923 | 25350 | 0.0 | - |
| 0.5934 | 25400 | 0.0001 | - |
| 0.5946 | 25450 | 0.0 | - |
| 0.5958 | 25500 | 0.0 | - |
| 0.5969 | 25550 | 0.0 | - |
| 0.5981 | 25600 | 0.0 | - |
| 0.5993 | 25650 | 0.0 | - |
| 0.6005 | 25700 | 0.0 | - |
| 0.6016 | 25750 | 0.0 | - |
| 0.6028 | 25800 | 0.0 | - |
| 0.6040 | 25850 | 0.0 | - |
| 0.6051 | 25900 | 0.0 | - |
| 0.6063 | 25950 | 0.0 | - |
| 0.6075 | 26000 | 0.0 | - |
| 0.6086 | 26050 | 0.0 | - |
| 0.6098 | 26100 | 0.0 | - |
| 0.6110 | 26150 | 0.0 | - |
| 0.6121 | 26200 | 0.0 | - |
| 0.6133 | 26250 | 0.0 | - |
| 0.6145 | 26300 | 0.0 | - |
| 0.6156 | 26350 | 0.0001 | - |
| 0.6168 | 26400 | 0.0 | - |
| 0.6180 | 26450 | 0.0 | - |
| 0.6191 | 26500 | 0.0 | - |
| 0.6203 | 26550 | 0.0 | - |
| 0.6215 | 26600 | 0.0001 | - |
| 0.6226 | 26650 | 0.0 | - |
| 0.6238 | 26700 | 0.0 | - |
| 0.6250 | 26750 | 0.0 | - |
| 0.6262 | 26800 | 0.0 | - |
| 0.6273 | 26850 | 0.0 | - |
| 0.6285 | 26900 | 0.0 | - |
| 0.6297 | 26950 | 0.0 | - |
| 0.6308 | 27000 | 0.0 | - |
| 0.6320 | 27050 | 0.0001 | - |
| 0.6332 | 27100 | 0.0 | - |
| 0.6343 | 27150 | 0.0 | - |
| 0.6355 | 27200 | 0.0 | - |
| 0.6367 | 27250 | 0.0001 | - |
| 0.6378 | 27300 | 0.0 | - |
| 0.6390 | 27350 | 0.0 | - |
| 0.6402 | 27400 | 0.0 | - |
| 0.6413 | 27450 | 0.0 | - |
| 0.6425 | 27500 | 0.0 | - |
| 0.6437 | 27550 | 0.0 | - |
| 0.6448 | 27600 | 0.0001 | - |
| 0.6460 | 27650 | 0.0001 | - |
| 0.6472 | 27700 | 0.0 | - |
| 0.6483 | 27750 | 0.0 | - |
| 0.6495 | 27800 | 0.0 | - |
| 0.6507 | 27850 | 0.0 | - |
| 0.6519 | 27900 | 0.0 | - |
| 0.6530 | 27950 | 0.0 | - |
| 0.6542 | 28000 | 0.0 | - |
| 0.6554 | 28050 | 0.0 | - |
| 0.6565 | 28100 | 0.0 | - |
| 0.6577 | 28150 | 0.0 | - |
| 0.6589 | 28200 | 0.0 | - |
| 0.6600 | 28250 | 0.0 | - |
| 0.6612 | 28300 | 0.0 | - |
| 0.6624 | 28350 | 0.0 | - |
| 0.6635 | 28400 | 0.0 | - |
| 0.6647 | 28450 | 0.0 | - |
| 0.6659 | 28500 | 0.0 | - |
| 0.6670 | 28550 | 0.0 | - |
| 0.6682 | 28600 | 0.0001 | - |
| 0.6694 | 28650 | 0.0 | - |
| 0.6705 | 28700 | 0.0 | - |
| 0.6717 | 28750 | 0.0 | - |
| 0.6729 | 28800 | 0.0 | - |
| 0.6740 | 28850 | 0.0 | - |
| 0.6752 | 28900 | 0.0 | - |
| 0.6764 | 28950 | 0.0 | - |
| 0.6776 | 29000 | 0.0 | - |
| 0.6787 | 29050 | 0.0 | - |
| 0.6799 | 29100 | 0.0 | - |
| 0.6811 | 29150 | 0.0001 | - |
| 0.6822 | 29200 | 0.0 | - |
| 0.6834 | 29250 | 0.0 | - |
| 0.6846 | 29300 | 0.0 | - |
| 0.6857 | 29350 | 0.0 | - |
| 0.6869 | 29400 | 0.0 | - |
| 0.6881 | 29450 | 0.0 | - |
| 0.6892 | 29500 | 0.0 | - |
| 0.6904 | 29550 | 0.0 | - |
| 0.6916 | 29600 | 0.0 | - |
| 0.6927 | 29650 | 0.0 | - |
| 0.6939 | 29700 | 0.0 | - |
| 0.6951 | 29750 | 0.0 | - |
| 0.6962 | 29800 | 0.0 | - |
| 0.6974 | 29850 | 0.0 | - |
| 0.6986 | 29900 | 0.0 | - |
| 0.6998 | 29950 | 0.0 | - |
| 0.7009 | 30000 | 0.0 | - |
| 0.7021 | 30050 | 0.0 | - |
| 0.7033 | 30100 | 0.0 | - |
| 0.7044 | 30150 | 0.0 | - |
| 0.7056 | 30200 | 0.0 | - |
| 0.7068 | 30250 | 0.0 | - |
| 0.7079 | 30300 | 0.0 | - |
| 0.7091 | 30350 | 0.0 | - |
| 0.7103 | 30400 | 0.0 | - |
| 0.7114 | 30450 | 0.0 | - |
| 0.7126 | 30500 | 0.0 | - |
| 0.7138 | 30550 | 0.0 | - |
| 0.7149 | 30600 | 0.0 | - |
| 0.7161 | 30650 | 0.0 | - |
| 0.7173 | 30700 | 0.0 | - |
| 0.7184 | 30750 | 0.0 | - |
| 0.7196 | 30800 | 0.0 | - |
| 0.7208 | 30850 | 0.0001 | - |
| 0.7219 | 30900 | 0.0 | - |
| 0.7231 | 30950 | 0.0 | - |
| 0.7243 | 31000 | 0.0 | - |
| 0.7255 | 31050 | 0.0 | - |
| 0.7266 | 31100 | 0.0 | - |
| 0.7278 | 31150 | 0.0 | - |
| 0.7290 | 31200 | 0.0 | - |
| 0.7301 | 31250 | 0.0 | - |
| 0.7313 | 31300 | 0.0 | - |
| 0.7325 | 31350 | 0.0 | - |
| 0.7336 | 31400 | 0.0 | - |
| 0.7348 | 31450 | 0.0 | - |
| 0.7360 | 31500 | 0.0 | - |
| 0.7371 | 31550 | 0.0 | - |
| 0.7383 | 31600 | 0.0001 | - |
| 0.7395 | 31650 | 0.0001 | - |
| 0.7406 | 31700 | 0.0 | - |
| 0.7418 | 31750 | 0.0 | - |
| 0.7430 | 31800 | 0.0 | - |
| 0.7441 | 31850 | 0.0 | - |
| 0.7453 | 31900 | 0.0 | - |
| 0.7465 | 31950 | 0.0 | - |
| 0.7476 | 32000 | 0.0 | - |
| 0.7488 | 32050 | 0.0 | - |
| 0.7500 | 32100 | 0.0 | - |
| 0.7512 | 32150 | 0.0 | - |
| 0.7523 | 32200 | 0.0 | - |
| 0.7535 | 32250 | 0.0 | - |
| 0.7547 | 32300 | 0.0 | - |
| 0.7558 | 32350 | 0.0 | - |
| 0.7570 | 32400 | 0.0 | - |
| 0.7582 | 32450 | 0.0 | - |
| 0.7593 | 32500 | 0.0 | - |
| 0.7605 | 32550 | 0.0 | - |
| 0.7617 | 32600 | 0.0 | - |
| 0.7628 | 32650 | 0.0 | - |
| 0.7640 | 32700 | 0.0 | - |
| 0.7652 | 32750 | 0.0 | - |
| 0.7663 | 32800 | 0.0 | - |
| 0.7675 | 32850 | 0.0 | - |
| 0.7687 | 32900 | 0.0 | - |
| 0.7698 | 32950 | 0.0 | - |
| 0.7710 | 33000 | 0.0 | - |
| 0.7722 | 33050 | 0.0 | - |
| 0.7733 | 33100 | 0.0 | - |
| 0.7745 | 33150 | 0.0 | - |
| 0.7757 | 33200 | 0.0 | - |
| 0.7769 | 33250 | 0.0 | - |
| 0.7780 | 33300 | 0.0 | - |
| 0.7792 | 33350 | 0.0 | - |
| 0.7804 | 33400 | 0.0 | - |
| 0.7815 | 33450 | 0.0 | - |
| 0.7827 | 33500 | 0.0 | - |
| 0.7839 | 33550 | 0.0 | - |
| 0.7850 | 33600 | 0.0 | - |
| 0.7862 | 33650 | 0.0 | - |
| 0.7874 | 33700 | 0.0001 | - |
| 0.7885 | 33750 | 0.0 | - |
| 0.7897 | 33800 | 0.0 | - |
| 0.7909 | 33850 | 0.0 | - |
| 0.7920 | 33900 | 0.0 | - |
| 0.7932 | 33950 | 0.0 | - |
| 0.7944 | 34000 | 0.0 | - |
| 0.7955 | 34050 | 0.0 | - |
| 0.7967 | 34100 | 0.0 | - |
| 0.7979 | 34150 | 0.0 | - |
| 0.7990 | 34200 | 0.0 | - |
| 0.8002 | 34250 | 0.0 | - |
| 0.8014 | 34300 | 0.0 | - |
| 0.8026 | 34350 | 0.0 | - |
| 0.8037 | 34400 | 0.0 | - |
| 0.8049 | 34450 | 0.0 | - |
| 0.8061 | 34500 | 0.0 | - |
| 0.8072 | 34550 | 0.0 | - |
| 0.8084 | 34600 | 0.0 | - |
| 0.8096 | 34650 | 0.0 | - |
| 0.8107 | 34700 | 0.0 | - |
| 0.8119 | 34750 | 0.0 | - |
| 0.8131 | 34800 | 0.0 | - |
| 0.8142 | 34850 | 0.0 | - |
| 0.8154 | 34900 | 0.0 | - |
| 0.8166 | 34950 | 0.0 | - |
| 0.8177 | 35000 | 0.0 | - |
| 0.8189 | 35050 | 0.0 | - |
| 0.8201 | 35100 | 0.0 | - |
| 0.8212 | 35150 | 0.0 | - |
| 0.8224 | 35200 | 0.0 | - |
| 0.8236 | 35250 | 0.0 | - |
| 0.8247 | 35300 | 0.0 | - |
| 0.8259 | 35350 | 0.0 | - |
| 0.8271 | 35400 | 0.0 | - |
| 0.8283 | 35450 | 0.0 | - |
| 0.8294 | 35500 | 0.0 | - |
| 0.8306 | 35550 | 0.0 | - |
| 0.8318 | 35600 | 0.0 | - |
| 0.8329 | 35650 | 0.0 | - |
| 0.8341 | 35700 | 0.0 | - |
| 0.8353 | 35750 | 0.0 | - |
| 0.8364 | 35800 | 0.0 | - |
| 0.8376 | 35850 | 0.0 | - |
| 0.8388 | 35900 | 0.0 | - |
| 0.8399 | 35950 | 0.0 | - |
| 0.8411 | 36000 | 0.0 | - |
| 0.8423 | 36050 | 0.0 | - |
| 0.8434 | 36100 | 0.0 | - |
| 0.8446 | 36150 | 0.0 | - |
| 0.8458 | 36200 | 0.0 | - |
| 0.8469 | 36250 | 0.0 | - |
| 0.8481 | 36300 | 0.0 | - |
| 0.8493 | 36350 | 0.0 | - |
| 0.8504 | 36400 | 0.0 | - |
| 0.8516 | 36450 | 0.0 | - |
| 0.8528 | 36500 | 0.0 | - |
| 0.8540 | 36550 | 0.0 | - |
| 0.8551 | 36600 | 0.0 | - |
| 0.8563 | 36650 | 0.0 | - |
| 0.8575 | 36700 | 0.0 | - |
| 0.8586 | 36750 | 0.0 | - |
| 0.8598 | 36800 | 0.0 | - |
| 0.8610 | 36850 | 0.0 | - |
| 0.8621 | 36900 | 0.0 | - |
| 0.8633 | 36950 | 0.0 | - |
| 0.8645 | 37000 | 0.0 | - |
| 0.8656 | 37050 | 0.0 | - |
| 0.8668 | 37100 | 0.0 | - |
| 0.8680 | 37150 | 0.0 | - |
| 0.8691 | 37200 | 0.0 | - |
| 0.8703 | 37250 | 0.0 | - |
| 0.8715 | 37300 | 0.0 | - |
| 0.8726 | 37350 | 0.0 | - |
| 0.8738 | 37400 | 0.0 | - |
| 0.8750 | 37450 | 0.0 | - |
| 0.8761 | 37500 | 0.0 | - |
| 0.8773 | 37550 | 0.0 | - |
| 0.8785 | 37600 | 0.0 | - |
| 0.8797 | 37650 | 0.0 | - |
| 0.8808 | 37700 | 0.0 | - |
| 0.8820 | 37750 | 0.0 | - |
| 0.8832 | 37800 | 0.0 | - |
| 0.8843 | 37850 | 0.0 | - |
| 0.8855 | 37900 | 0.0 | - |
| 0.8867 | 37950 | 0.0 | - |
| 0.8878 | 38000 | 0.0 | - |
| 0.8890 | 38050 | 0.0 | - |
| 0.8902 | 38100 | 0.0 | - |
| 0.8913 | 38150 | 0.0 | - |
| 0.8925 | 38200 | 0.0 | - |
| 0.8937 | 38250 | 0.0 | - |
| 0.8948 | 38300 | 0.0 | - |
| 0.8960 | 38350 | 0.0 | - |
| 0.8972 | 38400 | 0.0 | - |
| 0.8983 | 38450 | 0.0 | - |
| 0.8995 | 38500 | 0.0 | - |
| 0.9007 | 38550 | 0.0 | - |
| 0.9018 | 38600 | 0.0 | - |
| 0.9030 | 38650 | 0.0 | - |
| 0.9042 | 38700 | 0.0 | - |
| 0.9054 | 38750 | 0.0 | - |
| 0.9065 | 38800 | 0.0 | - |
| 0.9077 | 38850 | 0.0 | - |
| 0.9089 | 38900 | 0.0 | - |
| 0.9100 | 38950 | 0.0 | - |
| 0.9112 | 39000 | 0.0 | - |
| 0.9124 | 39050 | 0.0 | - |
| 0.9135 | 39100 | 0.0 | - |
| 0.9147 | 39150 | 0.0 | - |
| 0.9159 | 39200 | 0.0 | - |
| 0.9170 | 39250 | 0.0 | - |
| 0.9182 | 39300 | 0.0 | - |
| 0.9194 | 39350 | 0.0 | - |
| 0.9205 | 39400 | 0.0 | - |
| 0.9217 | 39450 | 0.0 | - |
| 0.9229 | 39500 | 0.0 | - |
| 0.9240 | 39550 | 0.0 | - |
| 0.9252 | 39600 | 0.0 | - |
| 0.9264 | 39650 | 0.0 | - |
| 0.9275 | 39700 | 0.0 | - |
| 0.9287 | 39750 | 0.0 | - |
| 0.9299 | 39800 | 0.0 | - |
| 0.9311 | 39850 | 0.0 | - |
| 0.9322 | 39900 | 0.0 | - |
| 0.9334 | 39950 | 0.0 | - |
| 0.9346 | 40000 | 0.0 | - |
| 0.9357 | 40050 | 0.0 | - |
| 0.9369 | 40100 | 0.0 | - |
| 0.9381 | 40150 | 0.0 | - |
| 0.9392 | 40200 | 0.0 | - |
| 0.9404 | 40250 | 0.0 | - |
| 0.9416 | 40300 | 0.0001 | - |
| 0.9427 | 40350 | 0.0 | - |
| 0.9439 | 40400 | 0.0 | - |
| 0.9451 | 40450 | 0.0 | - |
| 0.9462 | 40500 | 0.0 | - |
| 0.9474 | 40550 | 0.0 | - |
| 0.9486 | 40600 | 0.0 | - |
| 0.9497 | 40650 | 0.0 | - |
| 0.9509 | 40700 | 0.0 | - |
| 0.9521 | 40750 | 0.0 | - |
| 0.9532 | 40800 | 0.0 | - |
| 0.9544 | 40850 | 0.0 | - |
| 0.9556 | 40900 | 0.0 | - |
| 0.9568 | 40950 | 0.0 | - |
| 0.9579 | 41000 | 0.0 | - |
| 0.9591 | 41050 | 0.0 | - |
| 0.9603 | 41100 | 0.0 | - |
| 0.9614 | 41150 | 0.0 | - |
| 0.9626 | 41200 | 0.0 | - |
| 0.9638 | 41250 | 0.0 | - |
| 0.9649 | 41300 | 0.0 | - |
| 0.9661 | 41350 | 0.0 | - |
| 0.9673 | 41400 | 0.0 | - |
| 0.9684 | 41450 | 0.0 | - |
| 0.9696 | 41500 | 0.0 | - |
| 0.9708 | 41550 | 0.0 | - |
| 0.9719 | 41600 | 0.0 | - |
| 0.9731 | 41650 | 0.0 | - |
| 0.9743 | 41700 | 0.0 | - |
| 0.9754 | 41750 | 0.0 | - |
| 0.9766 | 41800 | 0.0 | - |
| 0.9778 | 41850 | 0.0 | - |
| 0.9789 | 41900 | 0.0 | - |
| 0.9801 | 41950 | 0.0 | - |
| 0.9813 | 42000 | 0.0 | - |
| 0.9825 | 42050 | 0.0 | - |
| 0.9836 | 42100 | 0.0 | - |
| 0.9848 | 42150 | 0.0 | - |
| 0.9860 | 42200 | 0.0 | - |
| 0.9871 | 42250 | 0.0 | - |
| 0.9883 | 42300 | 0.0 | - |
| 0.9895 | 42350 | 0.0 | - |
| 0.9906 | 42400 | 0.0 | - |
| 0.9918 | 42450 | 0.0 | - |
| 0.9930 | 42500 | 0.0 | - |
| 0.9941 | 42550 | 0.0 | - |
| 0.9953 | 42600 | 0.0 | - |
| 0.9965 | 42650 | 0.0 | - |
| 0.9976 | 42700 | 0.0 | - |
| 0.9988 | 42750 | 0.0 | - |
| 1.0000 | 42800 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
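For reference, a minimal inference sketch with the SetFit API listed above; the repository id below is a placeholder (this checkpoint's actual Hub id should be substituted), so treat it as an assumed usage pattern rather than an official snippet.
```python
from setfit import SetFitModel

# Placeholder repo id -- replace with the actual Hub id of this checkpoint.
model = SetFitModel.from_pretrained("your-username/your-setfit-model")

# SetFit returns one predicted label per input text.
preds = model.predict([
    "example sentence one",
    "example sentence two",
])
print(preds)
```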
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ramo6627/gemma-Code-Instruct-Finetune-test
|
ramo6627
| 2024-03-05T23:16:28Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T23:14:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rdp99/distilbert-base-uncased-finetuned-emotion
|
rdp99
| 2024-03-05T23:16:17Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T21:46:40Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2834
- Accuracy: 0.8853
- F1: 0.8853
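As a quick usage reference, a minimal inference sketch with the 🤗 `pipeline` API is shown below (assumed usage; the label names depend on the unspecified emotion dataset used for fine-tuning):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rdp99/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top predicted emotion label and its score for each input.
print(classifier("I can't wait to see you this weekend!"))
```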
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4378 | 1.0 | 109 | 0.2883 | 0.8819 | 0.8819 |
| 0.2536 | 2.0 | 218 | 0.2834 | 0.8853 | 0.8853 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
GIZ/SECTOR-multilabel-climatebert_f
|
GIZ
| 2024-03-05T23:09:55Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:GIZ/policy_classification",
"base_model:climatebert/distilroberta-base-climate-f",
"base_model:finetune:climatebert/distilroberta-base-climate-f",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T22:35:53Z |
---
license: apache-2.0
base_model: climatebert/distilroberta-base-climate-f
tags:
- generated_from_trainer
model-index:
- name: SECTOR-multilabel-climatebert
results: []
datasets:
- GIZ/policy_classification
co2_eq_emissions:
emissions: 28.6797414394632
source: codecarbon
training_type: fine-tuning
on_cloud: true
cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
ram_total_size: 12.6747894287109
hours_used: 0.706
hardware_used: 1 x Tesla T4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SECTOR-multilabel-climatebert
This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the [Policy-Classification](https://huggingface.co/datasets/GIZ/policy_classification) dataset.
*The loss function `BCEWithLogitsLoss` is modified with `pos_weight` to emphasize recall; therefore, the evaluation metrics rather than the loss are used to assess model performance during training.*
It achieves the following results on the evaluation set:
- Loss: 0.6028
- Precision-micro: 0.6395
- Precision-samples: 0.7543
- Precision-weighted: 0.6475
- Recall-micro: 0.7762
- Recall-samples: 0.8583
- Recall-weighted: 0.7762
- F1-micro: 0.7012
- F1-samples: 0.7655
- F1-weighted: 0.7041
## Model description
The purpose of this model is to predict multiple labels simultaneously for a given input text. Specifically, the model predicts the following sector labels: Agriculture, Buildings, Coastal Zone, Cross-Cutting Area, Disaster Risk Management (DRM), Economy-wide, Education, Energy, Environment, Health, Industries, LULUCF/Forestry, Social Development, Tourism, Transport, Urban, Waste, and Water.
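A minimal multi-label inference sketch is shown below (not an official snippet from the authors); it applies a sigmoid to the logits and keeps every sector whose score exceeds an assumed 0.5 threshold.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "GIZ/SECTOR-multilabel-climatebert_f"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The national plan expands solar capacity and retrofits public buildings."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: sigmoid per class, then threshold (0.5 is an assumption).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```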
## Intended uses & limitations
More information needed
## Training and evaluation data
- Training Dataset: 10123 examples
| Class | Positive Count of Class|
|:-------------|:--------|
| Agriculture | 2235 |
| Buildings | 169 |
| Coastal Zone | 698|
| Cross-Cutting Area | 1853 |
| Disaster Risk Management (DRM) | 814 |
| Economy-wide | 873 |
| Education | 180|
| Energy | 2847 |
| Environment | 905 |
| Health | 662|
| Industries | 419 |
| LULUCF/Forestry | 1861|
| Social Development | 507 |
| Tourism | 192 |
| Transport | 1173|
| Urban | 558 |
| Waste | 714|
| Water | 1207 |
- Validation Dataset: 936 examples
| Class | Positive Count of Class|
|:-------------|:--------|
| Agriculture | 200 |
| Buildings | 18 |
| Coastal Zone | 71|
| Cross-Cutting Area | 180 |
| Disaster Risk Management (DRM) | 85 |
| Economy-wide | 85 |
| Education | 23|
| Energy | 254 |
| Environment | 91 |
| Health | 68|
| Industries | 41 |
| LULUCF/Forestry | 193|
| Social Development | 56 |
| Tourism | 28 |
| Transport | 107|
| Urban | 51 |
| Waste | 59|
| Water | 106 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.07e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:-----------------:|:------------------:|:------------:|:--------------:|:---------------:|:--------:|:----------:|:-----------:|
| 0.6978 | 1.0 | 633 | 0.5968 | 0.3948 | 0.5274 | 0.4982 | 0.7873 | 0.8675 | 0.7873 | 0.5259 | 0.5996 | 0.5793 |
| 0.485 | 2.0 | 1266 | 0.5255 | 0.5089 | 0.6365 | 0.5469 | 0.7984 | 0.8749 | 0.7984 | 0.6216 | 0.6907 | 0.6384 |
| 0.3657 | 3.0 | 1899 | 0.5248 | 0.4984 | 0.6617 | 0.5397 | 0.8141 | 0.8769 | 0.8141 | 0.6183 | 0.7066 | 0.6393 |
| 0.2585 | 4.0 | 2532 | 0.5457 | 0.5807 | 0.7148 | 0.5992 | 0.8007 | 0.8752 | 0.8007 | 0.6732 | 0.7449 | 0.6813 |
| 0.1841 | 5.0 | 3165 | 0.5551 | 0.6016 | 0.7426 | 0.6192 | 0.7937 | 0.8677 | 0.7937 | 0.6844 | 0.7590 | 0.6917 |
| 0.1359 | 6.0 | 3798 | 0.5913 | 0.6349 | 0.7506 | 0.6449 | 0.7844 | 0.8676 | 0.7844 | 0.7018 | 0.7667 | 0.7057 |
| 0.1133 | 7.0 | 4431 | 0.6028 | 0.6395 | 0.7543 | 0.6475 | 0.7762 | 0.8583 | 0.7762 | 0.7012 | 0.7655 | 0.7041 |
Per-class metrics on the validation set:
| Label | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| Agriculture | 0.720 | 0.850 | 0.780 | 200 |
| Buildings | 0.636 | 0.777 | 0.700 | 18 |
| Coastal Zone | 0.562 | 0.760 | 0.646 | 71 |
| Cross-Cutting Area | 0.569 | 0.777 | 0.657 | 180 |
| Disaster Risk Management (DRM) | 0.567 | 0.694 | 0.624 | 85 |
| Economy-wide | 0.461 | 0.635 | 0.534 | 85 |
| Education | 0.608 | 0.608 | 0.608 | 23 |
| Energy | 0.816 | 0.838 | 0.827 | 254 |
| Environment | 0.561 | 0.703 | 0.624 | 91 |
| Health | 0.708 | 0.750 | 0.728 | 68 |
| Industries | 0.660 | 0.902 | 0.762 | 41 |
| LULUCF/Forestry | 0.676 | 0.844 | 0.751 | 193 |
| Social Development | 0.593 | 0.678 | 0.633 | 56 |
| Tourism | 0.551 | 0.571 | 0.561 | 28 |
| Transport | 0.700 | 0.766 | 0.732 | 107 |
| Urban | 0.414 | 0.568 | 0.479 | 51 |
| Waste | 0.658 | 0.881 | 0.753 | 59 |
| Water | 0.602 | 0.773 | 0.677 | 106 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.02867 kg of CO2
- **Hours Used**: 0.706 hours
### Training Hardware
- **On Cloud**: yes
- **GPU Model**: 1 x Tesla T4
- **CPU Model**: Intel(R) Xeon(R) CPU @ 2.00GHz
- **RAM Size**: 12.67 GB
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tomaszki/gemma-28-copy
|
tomaszki
| 2024-03-05T23:09:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T23:06:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sbaner24/vit-base-patch16-224-Trial007-YEL_STEM2
|
sbaner24
| 2024-03-05T23:06:59Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-20T21:05:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-Trial007-YEL_STEM2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9814814814814815
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-Trial007-YEL_STEM2
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1172
- Accuracy: 0.9815
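A minimal inference sketch with the 🤗 image-classification `pipeline` (assumed usage, not provided by the authors; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="sbaner24/vit-base-patch16-224-Trial007-YEL_STEM2",
)

# Accepts a local path or URL to an image; returns labels with scores.
print(classifier("path/to/your_image.jpg"))
```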
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 60
- eval_batch_size: 60
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6676 | 0.89 | 2 | 0.6180 | 0.7222 |
| 0.5805 | 1.78 | 4 | 0.5004 | 0.7593 |
| 0.5012 | 2.67 | 6 | 0.3783 | 0.9630 |
| 0.2794 | 4.0 | 9 | 0.2285 | 0.9630 |
| 0.2695 | 4.89 | 11 | 0.2551 | 0.8889 |
| 0.2782 | 5.78 | 13 | 0.1079 | 0.9630 |
| 0.2131 | 6.67 | 15 | 0.1205 | 0.9630 |
| 0.1537 | 8.0 | 18 | 0.1861 | 0.9630 |
| 0.1739 | 8.89 | 20 | 0.1172 | 0.9815 |
| 0.1059 | 9.78 | 22 | 0.1092 | 0.9815 |
| 0.146 | 10.67 | 24 | 0.1072 | 0.9815 |
| 0.088 | 12.0 | 27 | 0.1015 | 0.9815 |
| 0.1304 | 12.89 | 29 | 0.1151 | 0.9815 |
| 0.0924 | 13.78 | 31 | 0.1313 | 0.9815 |
| 0.091 | 14.67 | 33 | 0.1178 | 0.9815 |
| 0.0508 | 16.0 | 36 | 0.0971 | 0.9815 |
| 0.1004 | 16.89 | 38 | 0.1175 | 0.9815 |
| 0.1097 | 17.78 | 40 | 0.1423 | 0.9630 |
| 0.0758 | 18.67 | 42 | 0.1597 | 0.9630 |
| 0.0687 | 20.0 | 45 | 0.1205 | 0.9815 |
| 0.0513 | 20.89 | 47 | 0.1107 | 0.9815 |
| 0.0755 | 21.78 | 49 | 0.1150 | 0.9815 |
| 0.0897 | 22.67 | 51 | 0.1332 | 0.9630 |
| 0.0439 | 24.0 | 54 | 0.1263 | 0.9815 |
| 0.0607 | 24.89 | 56 | 0.1111 | 0.9815 |
| 0.0719 | 25.78 | 58 | 0.1004 | 0.9815 |
| 0.0599 | 26.67 | 60 | 0.1064 | 0.9815 |
| 0.0613 | 28.0 | 63 | 0.1355 | 0.9815 |
| 0.0689 | 28.89 | 65 | 0.1444 | 0.9815 |
| 0.0754 | 29.78 | 67 | 0.1398 | 0.9815 |
| 0.0835 | 30.67 | 69 | 0.1345 | 0.9815 |
| 0.0801 | 32.0 | 72 | 0.1348 | 0.9815 |
| 0.0701 | 32.89 | 74 | 0.1365 | 0.9815 |
| 0.0647 | 33.78 | 76 | 0.1348 | 0.9815 |
| 0.0982 | 34.67 | 78 | 0.1346 | 0.9815 |
| 0.0671 | 36.0 | 81 | 0.1378 | 0.9815 |
| 0.054 | 36.89 | 83 | 0.1371 | 0.9815 |
| 0.0735 | 37.78 | 85 | 0.1355 | 0.9815 |
| 0.0736 | 38.67 | 87 | 0.1349 | 0.9815 |
| 0.0287 | 40.0 | 90 | 0.1329 | 0.9815 |
| 0.0539 | 40.89 | 92 | 0.1322 | 0.9815 |
| 0.0483 | 41.78 | 94 | 0.1324 | 0.9815 |
| 0.083 | 42.67 | 96 | 0.1319 | 0.9815 |
| 0.0558 | 44.0 | 99 | 0.1319 | 0.9815 |
| 0.0752 | 44.44 | 100 | 0.1319 | 0.9815 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.1
|
OwOOwO/eacc_bm2c2
|
OwOOwO
| 2024-03-05T23:05:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T18:25:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Buseak/md_mt5_0109_v6
|
Buseak
| 2024-03-05T23:05:36Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:Buseak/md_mt5_0109_v5",
"base_model:finetune:Buseak/md_mt5_0109_v5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-05T19:42:56Z |
---
license: apache-2.0
base_model: Buseak/md_mt5_0109_v5
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: md_mt5_0109_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# md_mt5_0109_v6
This model is a fine-tuned version of [Buseak/md_mt5_0109_v5](https://huggingface.co/Buseak/md_mt5_0109_v5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Bleu: 0.6537
- Gen Len: 18.9513
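A minimal generation sketch for this mT5 checkpoint is given below (assumed usage; the exact input format expected by the model is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Buseak/md_mt5_0109_v6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; generated sequences in evaluation averaged ~19 tokens.
inputs = tokenizer("your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```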
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.1911 | 1.0 | 975 | 0.0801 | 0.6356 | 18.9449 |
| 0.1854 | 2.0 | 1950 | 0.0782 | 0.6365 | 18.9446 |
| 0.1807 | 3.0 | 2925 | 0.0755 | 0.6419 | 18.9485 |
| 0.175 | 4.0 | 3900 | 0.0732 | 0.6431 | 18.949 |
| 0.1699 | 5.0 | 4875 | 0.0720 | 0.6471 | 18.949 |
| 0.1669 | 6.0 | 5850 | 0.0701 | 0.6474 | 18.9497 |
| 0.165 | 7.0 | 6825 | 0.0682 | 0.6494 | 18.95 |
| 0.1604 | 8.0 | 7800 | 0.0673 | 0.6508 | 18.9505 |
| 0.1585 | 9.0 | 8775 | 0.0665 | 0.6516 | 18.9505 |
| 0.1512 | 10.0 | 9750 | 0.0652 | 0.6518 | 18.9508 |
| 0.1543 | 11.0 | 10725 | 0.0646 | 0.653 | 18.9505 |
| 0.155 | 12.0 | 11700 | 0.0639 | 0.6533 | 18.9505 |
| 0.1506 | 13.0 | 12675 | 0.0633 | 0.6537 | 18.951 |
| 0.1493 | 14.0 | 13650 | 0.0629 | 0.6538 | 18.951 |
| 0.1486 | 15.0 | 14625 | 0.0628 | 0.6537 | 18.9513 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
gokuls/wav2vec2-base-finetuned-ic-slurp-wt_init-frz-v1
|
gokuls
| 2024-03-05T23:04:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-05T17:00:14Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ic-slurp-wt_init-frz-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ic-slurp-wt_init-frz-v1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0306
- Accuracy: 0.0502
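A minimal inference sketch with the 🤗 audio-classification `pipeline` (assumed usage; the label set comes from the unspecified intent-classification dataset, and the audio path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="gokuls/wav2vec2-base-finetuned-ic-slurp-wt_init-frz-v1",
)

# Accepts a path to an audio file (or a raw waveform array) and returns scored labels.
print(classifier("path/to/utterance.wav"))
```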
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9426 | 1.0 | 527 | 4.1870 | 0.0420 |
| 3.7966 | 2.0 | 1055 | 4.0306 | 0.0502 |
| 3.7149 | 3.0 | 1582 | 3.9582 | 0.0434 |
| 3.6478 | 4.0 | 2110 | 3.9343 | 0.0427 |
| 3.5037 | 5.0 | 2637 | 3.9302 | 0.0413 |
| 3.4649 | 6.0 | 3165 | 3.9289 | 0.0474 |
| 3.2427 | 7.0 | 3692 | 3.9650 | 0.0473 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.15.0
|
sparkyfina/mistral7binstruct_summarize
|
sparkyfina
| 2024-03-05T23:03:24Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T22:30:40Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral7binstruct_summarize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7binstruct_summarize
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4700
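Since this repository contains a PEFT (LoRA) adapter rather than full model weights, it has to be loaded on top of the base model; a minimal sketch (assumed usage, with a placeholder prompt) is shown below.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "sparkyfina/mistral7binstruct_summarize"

# AutoPeftModelForCausalLM reads the base model recorded in the adapter config
# (mistralai/Mistral-7B-Instruct-v0.2) and attaches the LoRA weights to it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Summarize the following meeting transcript: ... [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```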
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7815 | 0.22 | 25 | 1.5691 |
| 1.5606 | 0.43 | 50 | 1.4700 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LarryAIDraw/LoRA_Nami
|
LarryAIDraw
| 2024-03-05T23:02:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-05T22:56:38Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/236693/lora-nami-one-piece-2-outfits
|
LarryAIDraw/nami_NOFACE_taaa0_7
|
LarryAIDraw
| 2024-03-05T23:02:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-05T22:54:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/142492/one-piece-series-nami
|
LarryAIDraw/haruna-09
|
LarryAIDraw
| 2024-03-05T23:01:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-05T22:54:00Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/263471/haruna-kai-ni-kancolle-or-7-outfits
|
BluetechOfficial/RMSDXL_Creative
|
BluetechOfficial
| 2024-03-05T23:00:44Z | 8 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-03-05T22:50:51Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/goldenpyramidsart.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# RMSDXL_Creative
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/BluetechOfficial/RMSDXL_Creative/tree/main) them in the Files & versions tab.
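A minimal sketch of loading these LoRA weights with 🧨 diffusers (assumed usage; depending on how the safetensors file is named in the repo, a `weight_name=` argument may be needed):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA weights from this repository on top of the SDXL base model.
pipe.load_lora_weights("BluetechOfficial/RMSDXL_Creative")

image = pipe("a creative golden pyramid artwork, highly detailed").images[0]
image.save("rmsdxl_creative_sample.png")
```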
|
farooqkhan2840503/gemma-Instruct-Finetune-simpleinput
|
farooqkhan2840503
| 2024-03-05T22:59:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T22:00:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Casper0508/Casper_falcon_7b
|
Casper0508
| 2024-03-05T22:58:04Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-03-05T02:39:00Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: Casper_falcon_7b
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Casper_falcon_7b
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training (a `BitsAndBytesConfig` sketch reproducing it follows the list):
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
- load_in_4bit: True
- load_in_8bit: False
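For reference, a hedged sketch of a `BitsAndBytesConfig` that reproduces the 4-bit settings listed above; passing it to the base model this way is an assumption, not taken from the original training script.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the sharded Falcon-7B base model in 4-bit before attaching the PEFT adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
```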
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.15.2
|
jjovalle99/gemma7bit-lora-sql
|
jjovalle99
| 2024-03-05T22:57:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:other",
"region:us"
] | null | 2024-03-05T03:21:18Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
datasets:
- generator
model-index:
- name: gemma7bit-lora-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma7bit-lora-sql
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4155
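Because this repo holds a LoRA adapter for `google/gemma-7b`, one common workflow is to attach it to the base model and optionally merge the weights for faster inference; a minimal sketch follows (assumed usage; the prompt format is illustrative, not taken from the training data).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Attach the SQL fine-tuned adapter, then fold it into the base weights.
model = PeftModel.from_pretrained(base, "jjovalle99/gemma7bit-lora-sql")
model = model.merge_and_unload()

prompt = "Write a SQL query that returns the ten most recent orders."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```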
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1399
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 16.1657 | 0.06 | 20 | 13.6485 |
| 7.8281 | 0.13 | 40 | 0.7808 |
| 0.6243 | 0.19 | 60 | 0.5270 |
| 0.5179 | 0.25 | 80 | 0.4859 |
| 0.4908 | 0.32 | 100 | 0.4754 |
| 0.4752 | 0.38 | 120 | 0.4600 |
| 0.4877 | 0.45 | 140 | 0.4584 |
| 0.4626 | 0.51 | 160 | 0.4560 |
| 0.4569 | 0.57 | 180 | 0.4428 |
| 0.4504 | 0.64 | 200 | 0.4354 |
| 0.4432 | 0.7 | 220 | 0.4348 |
| 0.4395 | 0.76 | 240 | 0.4317 |
| 0.4338 | 0.83 | 260 | 0.4256 |
| 0.4308 | 0.89 | 280 | 0.4260 |
| 0.4283 | 0.95 | 300 | 0.4210 |
| 0.4146 | 1.02 | 320 | 0.4225 |
| 0.3848 | 1.08 | 340 | 0.4186 |
| 0.3812 | 1.14 | 360 | 0.4185 |
| 0.38 | 1.21 | 380 | 0.4200 |
| 0.3795 | 1.27 | 400 | 0.4171 |
| 0.3766 | 1.34 | 420 | 0.4174 |
| 0.3772 | 1.4 | 440 | 0.4136 |
| 0.3777 | 1.46 | 460 | 0.4148 |
| 0.379 | 1.53 | 480 | 0.4155 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
anilerkul/crossing-outcome-random-splitting-model
|
anilerkul
| 2024-03-05T22:50:36Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T22:50:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LarryAIDraw/haruna_kantaicollection
|
LarryAIDraw
| 2024-03-05T22:48:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-02T17:50:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/131252/haruna-kantai-collection
|
chosenone80/arabert-ner-aner-test-1
|
chosenone80
| 2024-03-05T22:44:14Z | 45 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-05T22:28:20Z |
---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_keras_callback
model-index:
- name: chosenone80/arabert-ner-aner-test-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chosenone80/arabert-ner-aner-test-1
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0208
- Validation Loss: 0.1766
- Epoch: 0
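Since this checkpoint was trained with Keras/TensorFlow, a minimal token-classification sketch using the TF weights is shown below (assumed usage; the entity label set from the ANER-style data is not documented here, and the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chosenone80/arabert-ner-aner-test-1",
    framework="tf",
    aggregation_strategy="simple",
)

# Example Arabic sentence: "Mohammed lives in Cairo and works at Al-Azhar University."
print(ner("يعيش محمد في القاهرة ويعمل في جامعة الأزهر"))
```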
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2485, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0208 | 0.1766 | 0 |
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
gokuls/wav2vec2-base-finetuned-ic-slurp-wt_init
|
gokuls
| 2024-03-05T22:40:09Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-05T14:54:16Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ic-slurp-wt_init
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ic-slurp-wt_init
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8597
- Accuracy: 0.0627
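A minimal inference sketch (not taken from this card); the audio path is illustrative and the intent labels come from whatever `id2label` map the trainer saved:
```python
# Illustrative only: intent classification on a local audio file.
from transformers import pipeline

classifier = pipeline("audio-classification", model="gokuls/wav2vec2-base-finetuned-ic-slurp-wt_init")
predictions = classifier("utterance.wav", top_k=5)  # list of {"label", "score"} dicts
print(predictions)
```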
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7961 | 1.0 | 527 | 3.9199 | 0.0400 |
| 3.797 | 2.0 | 1055 | 3.9294 | 0.0520 |
| 3.9174 | 3.0 | 1582 | 3.8597 | 0.0627 |
| 3.9264 | 4.0 | 2110 | 3.8551 | 0.0627 |
| 3.8772 | 5.0 | 2637 | 3.8744 | 0.0627 |
| 3.9218 | 6.0 | 3165 | 3.8676 | 0.0627 |
| 3.8898 | 7.0 | 3692 | 3.8515 | 0.0627 |
| 3.9045 | 8.0 | 4220 | 3.8544 | 0.0627 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
titan0115/MITIS
|
titan0115
| 2024-03-05T22:39:06Z | 0 | 0 | null |
[
"art",
"anime",
"en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-03-02T21:04:21Z |
---
license: cc-by-nc-nd-4.0
language:
- en
tags:
- art
- anime
---
# Model Card for Model ID
A checkpoint intended for drawing art and anime.
### Model Description
Good day to all. I present my experiment: this is my first attempt at making my own model from scratch, deliberately rejecting the idea of merging existing models.
- **Developed by:** titan0115
- **Funded by:** motivation
- **Model type:** CHECKPOINT
- **Language(s) (NLP):** English
- **License:** cc-by-nc-nd-4.0
- **Finetuned from model:** absent
|
sweetfelinity/q-Taxi-v3
|
sweetfelinity
| 2024-03-05T22:34:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-05T22:34:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="sweetfelinity/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
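The `load_from_hub` helper above is not shipped with any library; a minimal sketch of one possible implementation (an assumption, based on the snippet above expecting a dict with an `env_id` key and, typically, a `qtable` array) is:
```python
# Illustrative only: download and unpickle the Q-table bundle from the Hub.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Fetch the pickled model dict saved by the training notebook."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="sweetfelinity/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
state, info = env.reset()
action = model["qtable"][state].argmax()  # greedy action from the learned Q-table (assumed key name)
```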
|
AlexandreManai/Taxi-v3
|
AlexandreManai
| 2024-03-05T22:32:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-05T22:32:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AlexandreManai/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlexandreManai/q-FrozenLake-v1-4x4-noSlippery
|
AlexandreManai
| 2024-03-05T22:28:34Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-05T22:28:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AlexandreManai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
akaistormherald/ToxicMist-v0.2-7B-DPO-gguf
|
akaistormherald
| 2024-03-05T22:28:14Z | 7 | 1 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:unsloth/zephyr-sft-bnb-4bit",
"base_model:quantized:unsloth/zephyr-sft-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-05T22:05:08Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/zephyr-sft-bnb-4bit
datasets:
- unalignment/toxic-dpo-v0.2
---
# Uploaded model
- **Developed by:** akaistormherald
- **License:** apache-2.0
- **Finetuned from model:** unsloth/zephyr-sft-bnb-4bit
Mistral 7B + SFT + 4-bit DPO training with unalignment/toxic-dpo-v0.2 == ToxicMist? ☣🌫
(GGUF)
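A minimal sketch of loading the GGUF file with `llama-cpp-python` (not taken from this card); the exact filename and the zephyr-style prompt template are assumptions, so check the repo's file list before running:
```python
# Illustrative only: local inference on the GGUF export.
from llama_cpp import Llama

llm = Llama(model_path="ToxicMist-v0.2-7B-DPO.Q4_K_M.gguf", n_ctx=4096)  # filename is an assumption
out = llm("<|user|>\nWrite a haiku about fog.\n<|assistant|>\n", max_tokens=64)
print(out["choices"][0]["text"])
```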
|
jucamohedano/Phi1.5-openhermes-preferences-metamath
|
jucamohedano
| 2024-03-05T22:18:17Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-03-04T22:25:06Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-1_5
datasets:
- generator
model-index:
- name: Phi1.5-openhermes-preferences-metamath
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi1.5-openhermes-preferences-metamath
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
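A minimal sketch (not taken from this card) of attaching this PEFT adapter to the `microsoft/phi-1_5` base model:
```python
# Illustrative only: load base model, apply the LoRA adapter, and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype=torch.float32)
model = PeftModel.from_pretrained(base, "jucamohedano/Phi1.5-openhermes-preferences-metamath")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("What is 12 * 7?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```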
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
akaistormherald/ToxicMist-v0.2-7B-DPO
|
akaistormherald
| 2024-03-05T22:17:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:unsloth/zephyr-sft-bnb-4bit",
"base_model:finetune:unsloth/zephyr-sft-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T20:40:33Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
base_model: unsloth/zephyr-sft-bnb-4bit
datasets:
- unalignment/toxic-dpo-v0.2
---
# Uploaded model
- **Developed by:** akaistormherald
- **License:** apache-2.0
- **Finetuned from model:** unsloth/zephyr-sft-bnb-4bit
Mistral 7B + SFT + 4-bit DPO training with unalignment/toxic-dpo-v0.2 == ToxicMist? ☣🌫
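A minimal text-generation sketch (not taken from this card); the zephyr-style prompt template is an assumption:
```python
# Illustrative only: plain text generation with the merged checkpoint.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="akaistormherald/ToxicMist-v0.2-7B-DPO",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("<|user|>\nWrite a haiku about fog.\n<|assistant|>\n", max_new_tokens=64)[0]["generated_text"])
```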
|
Abraham007China/q-Taxi-v3
|
Abraham007China
| 2024-03-05T22:10:30Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-05T22:09:04Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Abraham007China/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Yuan274/whale-image-generator
|
Yuan274
| 2024-03-05T22:04:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-05T21:59:41Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### whale-image-generator Dreambooth model trained by Yuan274 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
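A minimal generation sketch with `diffusers` (not taken from this card); the instance prompt for this DreamBooth concept is an assumption:
```python
# Illustrative only: generate an image from the DreamBooth checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yuan274/whale-image-generator", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of a whale swimming under the northern lights").images[0]
image.save("whale.png")
```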
Sample pictures of this concept:
|
Weni/ZeroShot-3.3.26-Mistral-7b-Multilanguage-3.2.0
|
Weni
| 2024-03-05T22:00:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:25:19Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: ZeroShot-3.3.26-Mistral-7b-Multilanguage-3.2.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZeroShot-3.3.26-Mistral-7b-Multilanguage-3.2.0
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0434
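A minimal sketch (not taken from this card) of loading the adapter with `AutoPeftModelForCausalLM`, which resolves the Mistral-7B-Instruct-v0.2 base automatically; the zero-shot prompt is illustrative:
```python
# Illustrative only: load the PEFT adapter on top of its base model and generate.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Weni/ZeroShot-3.3.26-Mistral-7b-Multilanguage-3.2.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Classify the intent of: 'quiero cancelar mi pedido' [/INST]"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```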
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1164 | 0.06 | 100 | 0.1037 |
| 0.1355 | 0.12 | 200 | 0.1051 |
| 0.1015 | 0.19 | 300 | 0.1142 |
| 0.1026 | 0.25 | 400 | 0.0992 |
| 0.1002 | 0.31 | 500 | 0.1083 |
| 0.0879 | 0.37 | 600 | 0.0894 |
| 0.0778 | 0.43 | 700 | 0.0907 |
| 0.0836 | 0.5 | 800 | 0.0747 |
| 0.0642 | 0.56 | 900 | 0.0645 |
| 0.0496 | 0.62 | 1000 | 0.0709 |
| 0.06 | 0.68 | 1100 | 0.0603 |
| 0.0614 | 0.74 | 1200 | 0.0567 |
| 0.0538 | 0.81 | 1300 | 0.0478 |
| 0.0524 | 0.87 | 1400 | 0.0449 |
| 0.0323 | 0.93 | 1500 | 0.0439 |
| 0.0498 | 0.99 | 1600 | 0.0434 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kaustavbhattacharjee/finetuning-DistillBERT-imdb
|
kaustavbhattacharjee
| 2024-03-05T21:59:22Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T21:31:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-DistillBERT-3000-samples
results: []
datasets:
- imdb
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-DistillBERT-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [IMDB](https://huggingface.co/datasets/imdb) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3396
- Accuracy: 0.87
- F1: 0.8730
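A minimal inference sketch (not taken from this card); label names follow whatever `id2label` map the trainer saved:
```python
# Illustrative only: sentiment classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="kaustavbhattacharjee/finetuning-DistillBERT-imdb")
print(classifier("A surprisingly tender film with a razor-sharp script."))
```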
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ogdanneedham/mistral-ls-0.1
|
ogdanneedham
| 2024-03-05T21:53:53Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T21:45:56Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tali1/autotrain-suricata-facebookai-roberta-large
|
tali1
| 2024-03-05T21:48:21Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:autotrain-suricata-facebookai-roberta-large/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T21:47:40Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-suricata-facebookai-roberta-large/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 2.317350149154663
- f1_macro: 0.02437641723356009
- f1_micro: 0.20574162679425836
- f1_weighted: 0.07021341231867546
- precision_macro: 0.014695830485304168
- precision_micro: 0.20574162679425836
- precision_weighted: 0.042329616995947894
- recall_macro: 0.07142857142857142
- recall_micro: 0.20574162679425836
- recall_weighted: 0.20574162679425836
- accuracy: 0.20574162679425836
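A minimal inference sketch (not taken from this card); given the validation metrics above, treat predictions with caution:
```python
# Illustrative only: the standard text-classification inference path for an AutoTrain model.
from transformers import pipeline

classifier = pipeline("text-classification", model="tali1/autotrain-suricata-facebookai-roberta-large")
print(classifier("I love AutoTrain"))
```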
|
mithegooie/code-search-net-tokenizer
|
mithegooie
| 2024-03-05T21:31:36Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-05T21:31:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Intel/demucs-openvino
|
Intel
| 2024-03-05T21:30:51Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2024-03-05T21:18:28Z |
---
license: mit
---
# Demucs OpenVINO
This repo stores OpenVINO(TM) models in IR format that are used to perform Music Separation.
Currently, the models stored here (htdemucs_V4.xml, htdemucs_v4.bin) are a conversion of the Demucs v4 model, with some 'outer' operations (such as STFT and iSTFT) stripped out.
This is intended to be used with the set of OpenVINO-based AI plugins for Audacity(R), here: https://github.com/intel/openvino-plugins-ai-audacity
More specifically, see details of pure-C++ implementation of the htdemucs pipeline here: https://github.com/intel/openvino-plugins-ai-audacity/blob/main/mod-openvino/htdemucs.cpp
This pipeline was ported from htdemucs.py, found here: https://github.com/facebookresearch/demucs
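A minimal sketch (not taken from this card) of loading the IR files with the OpenVINO runtime; filename casing follows the list above, so check the repo's file list, and note that the stripped STFT/iSTFT steps still have to be handled outside the graph, as the Audacity plugin does:
```python
# Illustrative only: load and compile the exported Demucs v4 graph with OpenVINO.
from openvino.runtime import Core

core = Core()
model = core.read_model("htdemucs_v4.xml")          # weights are picked up from the paired .bin file
compiled = core.compile_model(model, "CPU")
print([inp.get_any_name() for inp in compiled.inputs])  # inspect the expected input tensors
```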
# Citations:
```
@inproceedings{rouard2022hybrid,
title={Hybrid Transformers for Music Source Separation},
author={Rouard, Simon and Massa, Francisco and D{\'e}fossez, Alexandre},
booktitle={ICASSP 23},
year={2023}
}
@inproceedings{defossez2021hybrid,
title={Hybrid Spectrogram and Waveform Source Separation},
author={D{\'e}fossez, Alexandre},
booktitle={Proceedings of the ISMIR 2021 Workshop on Music Source Separation},
year={2021}
}
```
## Intel’s Human Rights Disclaimer:
Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.
|
Frase/tiny-bert-model-unsafe
|
Frase
| 2024-03-05T21:28:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-03-05T21:28:39Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
*DISCLAIMER*: This repo demonstrates a picklebomb payload in pytorch that may go undetected by superficial scanning.
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task.
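For context, a minimal sketch (not taken from this card) of setting up the *original* `prajjwal1/bert-tiny` checkpoint for downstream fine-tuning; do not load pickled weights from untrusted demo repos such as this one:
```python
# Illustrative only: prepare the original bert-tiny for a two-class downstream task.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-tiny", num_labels=2)
```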
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
Other models to check out:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
ccourc23/fine-tuned-Whisper-Tiny-en-US
|
ccourc23
| 2024-03-05T21:27:57Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-25T12:30:32Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: fine-tuned-Whisper-Tiny-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: minds14 - en(US)
type: PolyAI/minds14
config: en-US
split: train
args: 'config: en-US, split: test'
metrics:
- name: Wer
type: wer
value: 0.3247210804462713
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-Whisper-Tiny-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the minds14 - en(US) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7793
- Wer Ortho: 0.3222
- Wer: 0.3247
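A minimal transcription sketch (not taken from this card); the audio path is illustrative:
```python
# Illustrative only: short-form English transcription with the fine-tuned Whisper Tiny.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ccourc23/fine-tuned-Whisper-Tiny-en-US")
print(asr("banking_query.wav")["text"])
```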
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 400
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
| 0.0014 | 17.24 | 500 | 0.5901 | 0.3210 | 0.3188 |
| 0.0003 | 34.48 | 1000 | 0.6579 | 0.3124 | 0.3142 |
| 0.0002 | 51.72 | 1500 | 0.6892 | 0.3143 | 0.3165 |
| 0.0001 | 68.97 | 2000 | 0.7129 | 0.3167 | 0.3194 |
| 0.0001 | 86.21 | 2500 | 0.7330 | 0.3179 | 0.3206 |
| 0.0 | 103.45 | 3000 | 0.7511 | 0.3191 | 0.3218 |
| 0.0 | 120.69 | 3500 | 0.7653 | 0.3179 | 0.3206 |
| 0.0 | 137.93 | 4000 | 0.7793 | 0.3222 | 0.3247 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|