modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Jean-Baptiste/camembert-ner-with-dates
|
Jean-Baptiste
| 2023-06-16T01:31:43Z | 157,634 | 40 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language: fr
datasets:
- Jean-Baptiste/wikiner_fr
widget:
- text: "Je m'appelle jean-baptiste et j'habite à montréal depuis fevr 2012"
license: mit
---
# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).
## Introduction
[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.
The model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).
On my test data (a mix of chat and email), this model reached an F1 score of ~83% (by comparison, dateparser alone reached ~70%).
The [dateparser](https://dateparser.readthedocs.io/en/latest/) library can still be used on the model's output to convert the extracted text into Python datetime objects.
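As a short illustration of that post-processing step (a sketch, not part of the original card; the input string is trimmed from a DATE span like those in the example output below):
```python
# Minimal sketch: convert the text of a DATE entity into a Python datetime with dateparser.
import dateparser

date_text = "le 1er avril 1976"  # e.g. the text of a DATE entity returned by the NER pipeline below
print(dateparser.parse(date_text, languages=["fr"]))
# expected: datetime.datetime(1976, 4, 1, 0, 0)
```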
## How to use camembert-ner-with-dates with HuggingFace
##### Load camembert-ner-with-dates and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
# Process text sample (from Wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")
[{'entity_group': 'ORG',
'score': 0.9776379466056824,
'word': 'Apple',
'start': 0,
'end': 5},
{'entity_group': 'DATE',
'score': 0.9793774570737567,
'word': 'le 1er avril 1976 dans le',
'start': 15,
'end': 41},
{'entity_group': 'PER',
'score': 0.9958226680755615,
'word': 'Steve Jobs',
'start': 74,
'end': 85},
{'entity_group': 'LOC',
'score': 0.995087186495463,
'word': 'Los Altos',
'start': 87,
'end': 97},
{'entity_group': 'LOC',
'score': 0.9953305125236511,
'word': 'Californie',
'start': 100,
'end': 111},
{'entity_group': 'PER',
'score': 0.9961076378822327,
'word': 'Steve Jobs',
'start': 115,
'end': 126},
{'entity_group': 'PER',
'score': 0.9960325956344604,
'word': 'Steve Wozniak',
'start': 127,
'end': 141},
{'entity_group': 'PER',
'score': 0.9957776467005411,
'word': 'Ronald Wayne',
'start': 144,
'end': 157},
{'entity_group': 'DATE',
'score': 0.994030773639679,
'word': 'le 3 janvier 1977 à',
'start': 198,
'end': 218},
{'entity_group': 'ORG',
'score': 0.9720810294151306,
'word': "d'Apple Computer",
'start': 240,
'end': 257},
{'entity_group': 'DATE',
'score': 0.9924157659212748,
'word': '30 ans et',
'start': 272,
'end': 282},
{'entity_group': 'DATE',
'score': 0.9934852868318558,
'word': 'le 9 janvier 2015.',
'start': 363,
'end': 382}]
```
## Model performances (metric: seqeval)
Global
```
'precision': 0.928
'recall': 0.928
'f1': 0.928
```
By entity
```
Label LOC: (precision:0.929, recall:0.932, f1:0.931, support:9510)
Label PER: (precision:0.952, recall:0.965, f1:0.959, support:9399)
Label MISC: (precision:0.878, recall:0.844, f1:0.860, support:5364)
Label ORG: (precision:0.848, recall:0.883, f1:0.865, support:2299)
Label DATE: Not relevant because of method used to add date tag on wikiner dataset (estimated f1 ~90%)
```
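For reference, a minimal sketch (not from the original card) of how seqeval computes such entity-level scores from IOB-tagged sequences; the label sequences below are illustrative only:
```python
from seqeval.metrics import classification_report, f1_score

y_true = [["B-PER", "I-PER", "O", "B-DATE", "I-DATE"]]
y_pred = [["B-PER", "I-PER", "O", "B-DATE", "O"]]
print(f1_score(y_true, y_pred))               # micro-averaged F1 over entities
print(classification_report(y_true, y_pred))  # per-label precision/recall/F1/support
```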
|
thewalnutaisg/camembert-ner-with-dates
|
thewalnutaisg
| 2023-06-16T01:31:43Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"camembert",
"token-classification",
"fr",
"dataset:Jean-Baptiste/wikiner_fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-24T02:04:42Z |
---
language: fr
datasets:
- Jean-Baptiste/wikiner_fr
widget:
- text: "Je m'appelle jean-baptiste et j'habite à montréal depuis fevr 2012"
license: mit
---
# camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag).
## Introduction
[camembert-ner-with-dates] is an extension of the French camembert-ner model with an additional tag for dates.
The model was trained on an enriched version of the wikiner-fr dataset (~170,634 sentences).
On my test data (a mix of chat and email), this model reached an F1 score of ~83% (by comparison, dateparser alone reached ~70%).
The [dateparser](https://dateparser.readthedocs.io/en/latest/) library can still be used on the model's output to convert the extracted text into Python datetime objects.
## How to use camembert-ner-with-dates with HuggingFace
##### Load camembert-ner-with-dates and its sub-word tokenizer :
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
# Process text sample (from Wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple est créée le 1er avril 1976 dans le garage de la maison d'enfance de Steve Jobs à Los Altos en Californie par Steve Jobs, Steve Wozniak et Ronald Wayne14, puis constituée sous forme de société le 3 janvier 1977 à l'origine sous le nom d'Apple Computer, mais pour ses 30 ans et pour refléter la diversification de ses produits, le mot « computer » est retiré le 9 janvier 2015.")
[{'entity_group': 'ORG',
'score': 0.9776379466056824,
'word': 'Apple',
'start': 0,
'end': 5},
{'entity_group': 'DATE',
'score': 0.9793774570737567,
'word': 'le 1er avril 1976 dans le',
'start': 15,
'end': 41},
{'entity_group': 'PER',
'score': 0.9958226680755615,
'word': 'Steve Jobs',
'start': 74,
'end': 85},
{'entity_group': 'LOC',
'score': 0.995087186495463,
'word': 'Los Altos',
'start': 87,
'end': 97},
{'entity_group': 'LOC',
'score': 0.9953305125236511,
'word': 'Californie',
'start': 100,
'end': 111},
{'entity_group': 'PER',
'score': 0.9961076378822327,
'word': 'Steve Jobs',
'start': 115,
'end': 126},
{'entity_group': 'PER',
'score': 0.9960325956344604,
'word': 'Steve Wozniak',
'start': 127,
'end': 141},
{'entity_group': 'PER',
'score': 0.9957776467005411,
'word': 'Ronald Wayne',
'start': 144,
'end': 157},
{'entity_group': 'DATE',
'score': 0.994030773639679,
'word': 'le 3 janvier 1977 à',
'start': 198,
'end': 218},
{'entity_group': 'ORG',
'score': 0.9720810294151306,
'word': "d'Apple Computer",
'start': 240,
'end': 257},
{'entity_group': 'DATE',
'score': 0.9924157659212748,
'word': '30 ans et',
'start': 272,
'end': 282},
{'entity_group': 'DATE',
'score': 0.9934852868318558,
'word': 'le 9 janvier 2015.',
'start': 363,
'end': 382}]
```
## Model performances (metric: seqeval)
Global
```
'precision': 0.928
'recall': 0.928
'f1': 0.928
```
By entity
```
Label LOC: (precision:0.929, recall:0.932, f1:0.931, support:9510)
Label PER: (precision:0.952, recall:0.965, f1:0.959, support:9399)
Label MISC: (precision:0.878, recall:0.844, f1:0.860, support:5364)
Label ORG: (precision:0.848, recall:0.883, f1:0.865, support:2299)
Label DATE: Not relevant because of method used to add date tag on wikiner dataset (estimated f1 ~90%)
```
|
BChevva/bch-finetuning-sentiment-model-3000-samples
|
BChevva
| 2023-06-16T01:26:30Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T15:13:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: bch-finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- name: F1
type: f1
value: 0.9032258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bch-finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6781
- Accuracy: 0.9
- F1: 0.9032
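A minimal inference sketch (not part of the autogenerated card), using the `transformers` pipeline with this repo id; the input sentence is just an example:
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub and score a sample review.
classifier = pipeline("text-classification", model="BChevva/bch-finetuning-sentiment-model-3000-samples")
print(classifier("This movie was an absolute delight from start to finish."))
```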
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_no_pretrain_mnli
|
gokuls
| 2023-06-16T01:23:35Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T11:32:12Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_no_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3522172497965826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_mnli
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0986
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1037 | 1.0 | 4091 | 1.0994 | 0.3182 |
| 1.0988 | 2.0 | 8182 | 1.0986 | 0.3182 |
| 1.0987 | 3.0 | 12273 | 1.0989 | 0.3274 |
| 1.0987 | 4.0 | 16364 | 1.0986 | 0.3545 |
| 1.0987 | 5.0 | 20455 | 1.0986 | 0.3545 |
| 1.0987 | 6.0 | 24546 | 1.0986 | 0.3274 |
| 1.0986 | 7.0 | 28637 | 1.0986 | 0.3182 |
| 1.0986 | 8.0 | 32728 | 1.0986 | 0.3274 |
| 1.0986 | 9.0 | 36819 | 1.0986 | 0.3274 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
khalilou/rare-puppers
|
khalilou
| 2023-06-16T01:14:08Z | 238 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T01:14:02Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8928571343421936
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
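A minimal inference sketch (not part of the autogenerated card); the image path is a placeholder:
```python
from transformers import pipeline

# Classify a local image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="khalilou/rare-puppers")
print(classifier("path/to/your_image.jpg"))  # hypothetical local image path
```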
## Example Images
#### ball

#### car

#### person

#### stadium

#### world cup

|
hitachi-nlp/bert-base-japanese_nothing-wordpiece
|
hitachi-nlp
| 2023-06-16T01:07:33Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T08:08:06Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Nothing + WordPiece)
===
## How to load the tokenizer
Please download the dictionary file for Nothing + WordPiece from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/nothing_wordpiece.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
# load a tokenizer
dict_path = "/path/to/nothing_wordpiece.json"
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)
# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token='[UNK]',
cls_token='[CLS]',
sep_token='[SEP]',
pad_token='[PAD]',
mask_token='[MASK]'
)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こ','##ん','##に','##ち','##は','##。','##私','##は','##形','##態','##素','##解','##析','##器','##に','##つ','##い','##て','##研','##究','##を','##し','##て','##い','##ま','##す','##。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_nothing-wordpiece")
```
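A minimal fill-mask sketch (not from the original card), reusing the `tokenizer` and `model` objects built above and assuming `[MASK]` is registered as a special token in the dictionary file:
```python
import torch

# Predict the token at the [MASK] position with the loaded tokenizer and model.
text = "こんにちは。私は[MASK]について研究をしています。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.convert_ids_to_tokens(predicted_ids.tolist()))
```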
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
hitachi-nlp/bert-base-japanese_nothing-unigram
|
hitachi-nlp
| 2023-06-16T01:07:11Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T08:07:28Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Nothing + Unigram)
===
## How to load the tokenizer
Please download the dictionary file for Nothing + Unigram from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/nothing_unigram.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
# load a tokenizer
dict_path = "/path/to/nothing_unigram.json"
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)
# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token='[UNK]',
cls_token='[CLS]',
sep_token='[SEP]',
pad_token='[PAD]',
mask_token='[MASK]'
)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こん','に','ち','は','。','私','は','形態','素','解析','器','について','研究','をして','います','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_nothing-unigram")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
hitachi-nlp/bert-base-japanese_nothing-bpe
|
hitachi-nlp
| 2023-06-16T01:06:43Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T08:06:50Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Nothing + BPE)
===
## How to load the tokenizer
Please download the dictionary file for Nothing + BPE from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/nothing_bpe.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
# load a tokenizer
dict_path = "/path/to/nothing_bpe.json"
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)
# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token='[UNK]',
cls_token='[CLS]',
sep_token='[SEP]',
pad_token='[PAD]',
mask_token='[MASK]'
)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こん','に','ち','は','。','私は','形態','素','解析','器','について','研究','を','してい','ます','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_nothing-bpe")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
hitachi-nlp/bert-base-japanese_vaporetto-wordpiece
|
hitachi-nlp
| 2023-06-16T01:06:17Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T07:19:46Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Vaporetto + WordPiece)
===
## How to load the tokenizer
Please download the dictionary file for Vaporetto + WordPiece from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/vaporetto_wordpiece.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
import vaporetto
import textspan
class VaporettoPreTokenizer:
    def __init__(self, unidic_path: str):
        with open(unidic_path, 'rb') as fp:
            model = fp.read()
        self.tokenizer = vaporetto.Vaporetto(model, predict_tags=False)

    def tokenize(self, sequence: str) -> list[str]:
        tokens = self.tokenizer.tokenize(sequence)
        return [token.surface() for token in tokens]

    def custom_split(self, i: int, normalized_string: NormalizedString) -> list[NormalizedString]:
        text = str(normalized_string)
        tokens = self.tokenize(text)
        tokens_spans = textspan.get_original_spans(tokens, text)
        return [normalized_string[st:ed] for char_spans in tokens_spans for st, ed in char_spans]

    def pre_tokenize(self, pretok: PreTokenizedString):
        pretok.split(self.custom_split)
# load a pre-tokenizer
pre_tokenizer = VaporettoPreTokenizer("/path/to/bccwj-suw+unidic+tag.model.zst")
# load a tokenizer
dict_path = "/path/to/vaporetto_wordpiece.json"
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)
# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token='[UNK]',
cls_token='[CLS]',
sep_token='[SEP]',
pad_token='[PAD]',
mask_token='[MASK]'
)
# set a pre-tokenizer
tokenizer._tokenizer.pre_tokenizer = PreTokenizer.custom(pre_tokenizer)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こ','##ん','##に','##ち','##は','。','私','は','形態','素','解析','器','に','つい','て','研究','を','し','て','い','ます','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_vaporetto-wordpiece")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
hitachi-nlp/bert-base-japanese_sudachi-unigram
|
hitachi-nlp
| 2023-06-16T01:03:54Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-14T07:16:29Z |
---
license: cc-by-nc-sa-4.0
datasets:
- wikipedia
- cc100
language:
- ja
library_name: transformers
pipeline_tag: fill-mask
---
Japanese BERT-base (Sudachi + Unigram)
===
## How to load the tokenizer
Please download the dictionary file for Sudachi + Unigram from [our GitHub repository](https://github.com/hitachi-nlp/compare-ja-tokenizer/blob/public/data/dict/sudachi_unigram.json).
Then you can load the tokenizer by specifying the path of the dictionary file to `dict_path`.
```python
from typing import Optional
from tokenizers import Tokenizer, NormalizedString, PreTokenizedString
from tokenizers.processors import BertProcessing
from tokenizers.pre_tokenizers import PreTokenizer
from transformers import PreTrainedTokenizerFast
from sudachipy import tokenizer
from sudachipy import dictionary
import textspan
class SudachiPreTokenizer:
    def __init__(self, mecab_dict_path: Optional[str] = None):
        self.sudachi = dictionary.Dictionary().create()

    def tokenize(self, sequence: str) -> list[str]:
        return [token.surface() for token in self.sudachi.tokenize(sequence)]

    def custom_split(self, i: int, normalized_string: NormalizedString) -> list[NormalizedString]:
        text = str(normalized_string)
        tokens = self.tokenize(text)
        tokens_spans = textspan.get_original_spans(tokens, text)
        return [normalized_string[st:ed] for char_spans in tokens_spans for st, ed in char_spans]

    def pre_tokenize(self, pretok: PreTokenizedString):
        pretok.split(self.custom_split)
# load a pre-tokenizer
pre_tokenizer = SudachiPreTokenizer()
# load a tokenizer
dict_path = "/path/to/sudachi_unigram.json"
tokenizer = Tokenizer.from_file(dict_path)
tokenizer.post_processor = BertProcessing(
cls=("[CLS]", tokenizer.token_to_id('[CLS]')),
sep=("[SEP]", tokenizer.token_to_id('[SEP]'))
)
# convert to PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer,
unk_token='[UNK]',
cls_token='[CLS]',
sep_token='[SEP]',
pad_token='[PAD]',
mask_token='[MASK]'
)
# set a pre-tokenizer
tokenizer._tokenizer.pre_tokenizer = PreTokenizer.custom(pre_tokenizer)
```
```python
# Test
test_str = "こんにちは。私は形態素解析器について研究をしています。"
tokenizer.convert_ids_to_tokens(tokenizer(test_str).input_ids)
# -> ['[CLS]','こんにち','は','。','私','は','形態','素','解','析','器','に','つい','て','研究','を','し','て','い','ま','す','。','[SEP]']
```
## How to load the model
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("hitachi-nlp/bert-base_sudachi-unigram")
```
**See [our repository](https://github.com/hitachi-nlp/compare-ja-tokenizer) for more details!**
|
openlm-research/open_llama_7b
|
openlm-research
| 2023-06-16T00:45:23Z | 45,268 | 126 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T08:54:38Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Also note that we use a BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
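As a quick sanity check on the Hugging Face side (a sketch, not from the original card), the slow `LlamaTokenizer` is expected to prepend this BOS token by default:
```python
from transformers import LlamaTokenizer

# Verify that every encoded prompt starts with the BOS token (id=1).
tokenizer = LlamaTokenizer.from_pretrained('openlm-research/open_llama_7b')
ids = tokenizer('Q: What is the largest animal?\nA:', return_tensors='pt').input_ids
assert ids[0, 0].item() == tokenizer.bos_token_id  # expected to be 1 for this tokenizer
```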
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
openlm-research/open_llama_7b_easylm
|
openlm-research
| 2023-06-16T00:42:46Z | 0 | 3 | null |
[
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T09:10:59Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
To use the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Also note that we use a BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
AhmedMEGZ/whisper-finetuned
|
AhmedMEGZ
| 2023-06-16T00:35:31Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-15T22:32:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-finetuned
This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1346
- Wer: 7.2378
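A minimal inference sketch (not part of the autogenerated card); the audio path is a placeholder and is assumed to contain 16 kHz speech:
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="AhmedMEGZ/whisper-finetuned")
print(asr("path/to/audio.wav"))  # hypothetical local audio file
```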
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6697 | 0.27 | 200 | 0.6175 | 9.3248 |
| 0.1957 | 0.54 | 400 | 0.1761 | 8.2947 |
| 0.1476 | 0.81 | 600 | 0.1458 | 7.5990 |
| 0.0939 | 1.09 | 800 | 0.1372 | 7.4920 |
| 0.086 | 1.36 | 1000 | 0.1346 | 7.2378 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
yigitkucuk/FLS
|
yigitkucuk
| 2023-06-16T00:15:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T00:14:40Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FLS
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.56 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake**
This is a trained model of a **Q-Learning** agent playing **FrozenLake** .
## Usage
```python
import gymnasium as gym

model = load_from_hub(repo_id="yigitkucuk/FLS", filename="q-learning.pkl")  # load_from_hub is the helper from the Deep RL course notebooks
env = gym.make(model["env_id"])
```
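A greedy evaluation sketch (not part of the original card), assuming the pickled dict stores the learned table under a `qtable` key, as in the Deep RL course notebooks, and that the environment follows the Gymnasium 5-tuple step API:
```python
import numpy as np

# Roll out one greedy episode with the loaded Q-table.
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
env.close()
```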
|
yigitkucuk/FLNS
|
yigitkucuk
| 2023-06-16T00:11:37Z | 0 | 0 | null |
[
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T00:08:44Z |
---
tags:
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FLNS
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake**
This is a trained model of a **Q-Learning** agent playing **FrozenLake** .
## Usage
```python
import gymnasium as gym

model = load_from_hub(repo_id="yigitkucuk/FLNS", filename="q-learning.pkl")  # load_from_hub is the helper from the Deep RL course notebooks
env = gym.make(model["env_id"])
```
|
jetro30087/vicuna-Wizard-7B-Uncensored-linux-q3f16_0
|
jetro30087
| 2023-06-15T23:50:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-15T21:54:08Z |
# Model Card for vicuna-Wizard-7B-Uncensored-linux-q3f16_0
## Model Description
This language model (vicuna-Wizard-7B-Uncensored-linux-q3f16_0) is based on Facebook's "Llama" 7B-parameter model, trained on the Wizard-Vicuna uncensored dataset under a non-commercial license. It was specifically developed and formatted for use within the MLC-LLM project, which you can find more details about at the MLC-LLM project URL.
The model is designed for research and general text-generation purposes. Thanks to MLC-LLM's Vulkan compatibility, the model works on both Nvidia and AMD graphics cards.
## Model Usage
The vicuna-Wizard-7B-Uncensored-q3f16_0 model can generate human-like text that is useful for a variety of purposes, including but not limited to research, chatbots, and writing aids. You can use the model through MLC-LLM chat by copying it to the mlc-chat/dist folder of a compiled MLC-Chat client.
## Limitations and Bias
Although the model is capable of generating high-quality text, it is not perfect. Here are some potential limitations and biases:
- Output quality: Although trained on a large dataset, the model may occasionally produce text that is nonsensical or does not align with the input prompt.
- Biases in the data: The model has been trained on the Wizard-Vicuna uncensored dataset, and as such it may have inherited biases present in this data. Despite our best efforts to minimize this, it may reflect biases in terms of gender, race, age, or other aspects.
- Safety and content: The uncensored nature of the training dataset means that the model could potentially produce text that some people find offensive, inappropriate, or politically biased. We recommend using this model with care, especially in environments with young users or those who might be affected by such content.
- Incorrect information: The model generates text based on patterns it learned during training and does not have access to real-world knowledge or updates beyond its training cut-off. As a result, the information it provides should always be verified for accuracy.
## Ethical Considerations and Safety
While using this model, consider the following:
- Always verify the information provided by the model with reliable external sources before using it to make decisions or for factual reference.
- Monitor the output of the model for any potentially inappropriate or harmful content, especially if it is being used in a public or sensitive setting.
- Keep in mind the potential biases inherited from the training data and account for these when interpreting the output.
## Disclaimer
This model is provided as-is, and the developers make no warranties regarding its performance, appropriateness, or accuracy. Use it at your own risk.
License: other. See the [MLC-LLM runtime instructions](https://mlc.ai/mlc-llm/docs/tutorials/runtime/cpp.html) for details.
|
DionTimmer/controlnet_qrcode-control_v11p_sd21
|
DionTimmer
| 2023-06-15T23:37:20Z | 141 | 62 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"image-to-image",
"en",
"license:openrail++",
"region:us"
] |
image-to-image
| 2023-06-15T21:50:38Z |
---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
pipeline_tag: image-to-image
---
# QR Code Conditioned ControlNet Models for Stable Diffusion 2.1

## Model Description
This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v2.1.
The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, a 1.5 version model was also trained on the same dataset for those who are using the older version.
## How to use with diffusers
```bash
pip -q install diffusers transformers accelerate torch xformers
```
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v11p_sd21",
torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
)
pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
def resize_for_condition_image(input_image: Image, resolution: int):
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img
# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image
# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)
image = pipe(prompt="a bilboard in NYC with a qrcode",
negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
image=init_image,
control_image=condition_image,
width=768,
height=768,
guidance_scale=20,
controlnet_conditioning_scale=1.5,
generator=generator,
strength=0.9,
num_inference_steps=150,
)
image.images[0]
```
## Performance and Limitations
These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape, but be cautious, as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**
To balance style and shape, gentle fine-tuning of the control weight may be required based on the individual input and the desired output, as well as the right prompt. Some prompts do not work until you increase the weight considerably. Finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR-code-based artwork.
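For reference, a hedged sketch of generating such an input code with the third-party `qrcode` package (the package choice is an assumption; any generator that supports error-correction level 'H' works):
```python
import qrcode

# Generate a QR code with the highest error-correction level ('H', ~30%) to use as the condition image.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H, box_size=16, border=4)
qr.add_data("https://huggingface.co/DionTimmer/controlnet_qrcode-control_v11p_sd21")
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("qr_condition.png")
```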
## Installation
The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application.
For usage in auto1111, they can be placed in the webui/models/ControlNet folder. They can be loaded with the ControlNet webui extension, which you can install through the Extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your ControlNet unit and set your input image as the QR code. Set the model to either the SD 2.1 or 1.5 version, depending on your base Stable Diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail.
If you get stuck, look up additional info on how to use ControlNet; once you have the webui up and running, it's really easy to install the ControlNet extension as well.
|
flobbit/ohbugger2k
|
flobbit
| 2023-06-15T23:34:34Z | 7 | 0 |
fastai
|
[
"fastai",
"en",
"image classification",
"image-classification",
"doi:10.57967/hf/1005",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2023-06-15T20:50:37Z |
---
license: apache-2.0
tags:
- en
- image classification
- fastai
model-index:
- name: ohbugger2k by flobbit
results:
- task:
name: image classification
type: image-classification
metrics:
- name: accuracy
type: acc
num_train_epochs: 7
learning_rate: 0.002
value: 46
metrics:
- accuracy
pipeline_tag: image-classification
---
# Oh! Bugger! 2k Insect Classification
## Model description
The model is used to classify insect images into one of the 2000 North American species/classes. `resnet18` was used for training.
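A minimal loading sketch (not part of the original card), using the fastai integration in `huggingface_hub`; the image path is a placeholder:
```python
from huggingface_hub import from_pretrained_fastai

# Download the fastai learner from the Hub and classify a local insect photo.
learn = from_pretrained_fastai("flobbit/ohbugger2k")
pred_class, pred_idx, probs = learn.predict("path/to/insect.jpg")
print(pred_class, float(probs[pred_idx]))
```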
## Intended uses & limitations
The model was trained on 130,133 insect images spread over 2,000 species, with a minimum of 25 pictures per class. Some classes were trained on too few images, and the training pictures were not screened for quality. For example, giving the model a picture of a human finger will most likely return an insect species that had a finger in one of its training pictures, and a picture of a bug on the siding of a house will likely match a species photographed in a similar setting.
There are likely other biases in the training data.
## Training and evaluation data
The images used in training were scraped from the internet.
|
aldatascience/Video_Spotter
|
aldatascience
| 2023-06-15T23:23:00Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-15T23:23:00Z |
---
license: bigscience-openrail-m
---
|
gokuls/sa_BERT_48_qnli
|
gokuls
| 2023-06-15T23:22:15Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:23:01Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_48_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6983342485813655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_48_qnli
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6317
- Accuracy: 0.6983
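A hedged usage sketch, assuming the checkpoint loads through the standard `transformers` auto classes (the QNLI-style question/sentence pair below is made up):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gokuls/sa_BERT_48_qnli")
model = AutoModelForSequenceClassification.from_pretrained("gokuls/sa_BERT_48_qnli")

# QNLI asks whether the sentence answers the question (entailment vs. not_entailment)
inputs = tokenizer(
    "Where is the Eiffel Tower located?",
    "The Eiffel Tower stands in Paris, France.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```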
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.674 | 1.0 | 1092 | 0.6500 | 0.6253 |
| 0.6353 | 2.0 | 2184 | 0.6513 | 0.6244 |
| 0.5987 | 3.0 | 3276 | 0.6552 | 0.6357 |
| 0.5429 | 4.0 | 4368 | 0.6414 | 0.6760 |
| 0.465 | 5.0 | 5460 | 0.6317 | 0.6983 |
| 0.3904 | 6.0 | 6552 | 0.6376 | 0.7146 |
| 0.3215 | 7.0 | 7644 | 0.7152 | 0.7137 |
| 0.2584 | 8.0 | 8736 | 0.7690 | 0.7278 |
| 0.2096 | 9.0 | 9828 | 0.8507 | 0.7128 |
| 0.1685 | 10.0 | 10920 | 0.9555 | 0.7201 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_w_init_qnli
|
gokuls
| 2023-06-15T23:05:18Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:40:29Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_KD_w_init_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6417719201903715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_w_init_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6306
- Accuracy: 0.6418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6733 | 1.0 | 819 | 0.6643 | 0.5913 |
| 0.641 | 2.0 | 1638 | 0.6306 | 0.6418 |
| 0.5952 | 3.0 | 2457 | 0.6488 | 0.6377 |
| 0.5439 | 4.0 | 3276 | 0.6661 | 0.6302 |
| 0.4907 | 5.0 | 4095 | 0.6937 | 0.6253 |
| 0.4364 | 6.0 | 4914 | 0.7381 | 0.6297 |
| 0.3825 | 7.0 | 5733 | 0.8475 | 0.6240 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ghze/q-FrozenLake-v1-4x4-noSlippery
|
ghze
| 2023-06-15T22:46:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T22:46:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the course setup uses gymnasium; use `import gym` for older setups

# `load_from_hub` is the Deep RL course helper that downloads and unpickles the Q-table
model = load_from_hub(repo_id="ghze/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gokuls/sa_BERT_24_qnli
|
gokuls
| 2023-06-15T22:39:10Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:16:03Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_24_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6218195130880468
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_24_qnli
This model is a fine-tuned version of [gokuls/bert_base_24](https://huggingface.co/gokuls/bert_base_24) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6492
- Accuracy: 0.6218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6745 | 1.0 | 1092 | 0.6686 | 0.5817 |
| 0.6338 | 2.0 | 2184 | 0.6492 | 0.6218 |
| 0.5909 | 3.0 | 3276 | 0.6560 | 0.6251 |
| 0.5407 | 4.0 | 4368 | 0.7246 | 0.6269 |
| 0.4732 | 5.0 | 5460 | 0.6612 | 0.6421 |
| 0.3999 | 6.0 | 6552 | 0.7506 | 0.6410 |
| 0.3203 | 7.0 | 7644 | 0.9162 | 0.6306 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
HaiderAUT/poca-SoccerTwos
|
HaiderAUT
| 2023-06-15T22:35:58Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-15T22:24:47Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: HaiderAUT/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/sa_BERT_no_pretrain_mnli
|
gokuls
| 2023-06-15T22:16:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T14:41:40Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6700569568755086
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_mnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Accuracy: 0.6701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9765 | 1.0 | 4091 | 0.9090 | 0.5823 |
| 0.8799 | 2.0 | 8182 | 0.8625 | 0.6123 |
| 0.8193 | 3.0 | 12273 | 0.8227 | 0.6362 |
| 0.7551 | 4.0 | 16364 | 0.7929 | 0.6542 |
| 0.6961 | 5.0 | 20455 | 0.7901 | 0.6643 |
| 0.6403 | 6.0 | 24546 | 0.8298 | 0.6687 |
| 0.5831 | 7.0 | 28637 | 0.8135 | 0.6701 |
| 0.5224 | 8.0 | 32728 | 0.8831 | 0.6718 |
| 0.4602 | 9.0 | 36819 | 0.9055 | 0.6652 |
| 0.4003 | 10.0 | 40910 | 0.9812 | 0.6603 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BlinkDL/rwkv-4-pile-14b
|
BlinkDL
| 2023-06-15T21:55:03Z | 0 | 173 | null |
[
"pytorch",
"text-generation",
"causal-lm",
"rwkv",
"en",
"dataset:the_pile",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2022-10-20T11:47:59Z |
---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 14B
[UPDATE: Try RWKV-4-World (https://huggingface.co/BlinkDL/rwkv-4-world) for generation & chat & code in 100+ world languages, with great English zero-shot & in-context learning ability too.]
## Model Description
RWKV-4 14B is a L40-D5120 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
args.n_layer = 40
args.n_embd = 5120
Use https://github.com/BlinkDL/ChatRWKV to run it.
RWKV-4-Pile-14B-2023xxxx-ctx8192-testxxx.pth : Fine-tuned to ctx_len 8192.
* The best general model.
################################
"Raven": RWKV alpaca+vicuna-style model: https://huggingface.co/BlinkDL/rwkv-4-raven (highly recommended)
It is a strong chat model too. You can use +i for "Alpaca Instruct" in the latest ChatRWKV v2. Examples:
```
+i Explain the following metaphor: "Life is like cats".
+i write a python function to read data from an excel file.
```
################################
RWKV-4-Pile-14B-20230213-8019.pth : Trained on the Pile for 331B tokens
* Pile loss 1.7579 (ctx_len 1024)
* LAMBADA ppl 3.81, acc 71.05%
* PIQA acc 77.42%
* SC2016 acc 75.57%
* Hellaswag acc_norm 70.24%
* WinoGrande acc 62.98%
|
apopam/ppo-LunarLander-v2
|
apopam
| 2023-06-15T21:50:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T21:50:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -968.42 +/- 454.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- adjust it to the actual .zip in the repo
checkpoint = load_from_hub("apopam/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input
|
GyanShashwat
| 2023-06-15T21:44:45Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T19:49:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GyanShashwat/distilbert-base-uncased-finetuned-squad-with-customised-input
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9722
- Train End Logits Accuracy: 0.7309
- Train Start Logits Accuracy: 0.6905
- Validation Loss: 1.1232
- Validation End Logits Accuracy: 0.6943
- Validation Start Logits Accuracy: 0.6607
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11066, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5056 | 0.6077 | 0.5696 | 1.1629 | 0.6844 | 0.6471 | 0 |
| 0.9722 | 0.7309 | 0.6905 | 1.1232 | 0.6943 | 0.6607 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_KD_w_init_mrpc
|
gokuls
| 2023-06-15T21:42:30Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:34:41Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_48_KD_w_init_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7156862745098039
- name: F1
type: f1
value: 0.8104575163398692
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_w_init_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Accuracy: 0.7157
- F1: 0.8105
- Combined Score: 0.7631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6514 | 1.0 | 29 | 0.6205 | 0.6887 | 0.8146 | 0.7517 |
| 0.619 | 2.0 | 58 | 0.6165 | 0.6618 | 0.7366 | 0.6992 |
| 0.6208 | 3.0 | 87 | 0.5878 | 0.7157 | 0.8105 | 0.7631 |
| 0.578 | 4.0 | 116 | 0.5952 | 0.7132 | 0.7986 | 0.7559 |
| 0.5612 | 5.0 | 145 | 0.5910 | 0.6936 | 0.7899 | 0.7418 |
| 0.4844 | 6.0 | 174 | 0.6261 | 0.6520 | 0.7290 | 0.6905 |
| 0.4281 | 7.0 | 203 | 0.6146 | 0.7010 | 0.7932 | 0.7471 |
| 0.3919 | 8.0 | 232 | 0.7273 | 0.6838 | 0.7795 | 0.7317 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_w_init_mrpc
|
gokuls
| 2023-06-15T21:39:57Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:33:33Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_KD_w_init_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_w_init_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6240
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6725 | 1.0 | 29 | 0.6240 | 0.6838 | 0.8122 | 0.7480 |
| 0.6382 | 2.0 | 58 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |
| 0.6384 | 3.0 | 87 | 0.6279 | 0.6838 | 0.8122 | 0.7480 |
| 0.6437 | 4.0 | 116 | 0.6346 | 0.6838 | 0.8122 | 0.7480 |
| 0.6386 | 5.0 | 145 | 0.6242 | 0.6838 | 0.8122 | 0.7480 |
| 0.6364 | 6.0 | 174 | 0.6273 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_KD_w_init_cola
|
gokuls
| 2023-06-15T21:34:19Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:23:11Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_w_init_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_w_init_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6338 | 1.0 | 67 | 0.6182 | 0.0 | 0.6913 |
| 0.6194 | 2.0 | 134 | 0.6405 | 0.0 | 0.6913 |
| 0.6131 | 3.0 | 201 | 0.6188 | 0.0 | 0.6913 |
| 0.6128 | 4.0 | 268 | 0.6199 | 0.0 | 0.6913 |
| 0.6281 | 5.0 | 335 | 0.6197 | 0.0 | 0.6913 |
| 0.6146 | 6.0 | 402 | 0.6196 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hangeol/42
|
hangeol
| 2023-06-15T21:25:51Z | 39 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T20:33:10Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/42
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
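A minimal inference sketch with `diffusers`, assuming the learned embedding file is present in this repo and can be loaded with `load_textual_inversion`; the placeholder token name depends on how the embedding was trained:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo
pipe.load_textual_inversion("hangeol/42")

# Replace <concept> with the token the embedding was trained on (assumption)
image = pipe("a photo of <concept>").images[0]
image.save("example.png")
```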
|
gokuls/hBERTv1_new_pretrain_48_KD_w_init_sst2
|
gokuls
| 2023-06-15T21:22:52Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T20:31:50Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_KD_w_init_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8463302752293578
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_KD_w_init_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48_KD_wt_init) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3751
- Accuracy: 0.8463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3378 | 1.0 | 527 | 0.3751 | 0.8463 |
| 0.2032 | 2.0 | 1054 | 0.5684 | 0.8062 |
| 0.1549 | 3.0 | 1581 | 0.4930 | 0.8257 |
| 0.1241 | 4.0 | 2108 | 0.5828 | 0.8360 |
| 0.1048 | 5.0 | 2635 | 0.4589 | 0.8142 |
| 0.0872 | 6.0 | 3162 | 0.5902 | 0.8268 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/sa_BERT_48_mrpc
|
gokuls
| 2023-06-15T21:22:25Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T21:15:59Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sa_BERT_48_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6519607843137255
- name: F1
type: f1
value: 0.726923076923077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_48_mrpc
This model is a fine-tuned version of [gokuls/bert_base_48](https://huggingface.co/gokuls/bert_base_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6401
- Accuracy: 0.6520
- F1: 0.7269
- Combined Score: 0.6894
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6588 | 1.0 | 39 | 0.6401 | 0.6520 | 0.7269 | 0.6894 |
| 0.5982 | 2.0 | 78 | 0.6441 | 0.6863 | 0.7801 | 0.7332 |
| 0.4614 | 3.0 | 117 | 0.6615 | 0.6740 | 0.7787 | 0.7264 |
| 0.3148 | 4.0 | 156 | 0.7447 | 0.6765 | 0.7770 | 0.7267 |
| 0.226 | 5.0 | 195 | 0.9718 | 0.6054 | 0.6957 | 0.6505 |
| 0.1566 | 6.0 | 234 | 1.2879 | 0.5564 | 0.6268 | 0.5916 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_w_init_sst2
|
gokuls
| 2023-06-15T21:20:01Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T20:31:47Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_KD_w_init_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8394495412844036
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_w_init_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD_wt_init) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4188
- Accuracy: 0.8394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3594 | 1.0 | 527 | 0.4188 | 0.8394 |
| 0.2344 | 2.0 | 1054 | 0.5086 | 0.8337 |
| 0.2012 | 3.0 | 1581 | 0.5127 | 0.8177 |
| 0.1723 | 4.0 | 2108 | 0.4814 | 0.8200 |
| 0.1425 | 5.0 | 2635 | 0.4872 | 0.8314 |
| 0.12 | 6.0 | 3162 | 0.5835 | 0.8222 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/add_BERT_48_cola
|
gokuls
| 2023-06-15T21:10:53Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T20:56:59Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: add_BERT_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_48_cola
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new_48](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6179
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6211 | 1.0 | 67 | 0.6193 | 0.0 | 0.6913 |
| 0.6175 | 2.0 | 134 | 0.6525 | 0.0 | 0.6913 |
| 0.6147 | 3.0 | 201 | 0.6190 | 0.0 | 0.6913 |
| 0.6126 | 4.0 | 268 | 0.6182 | 0.0 | 0.6913 |
| 0.61 | 5.0 | 335 | 0.6179 | 0.0 | 0.6913 |
| 0.6104 | 6.0 | 402 | 0.6184 | 0.0 | 0.6913 |
| 0.6108 | 7.0 | 469 | 0.6223 | 0.0 | 0.6913 |
| 0.6108 | 8.0 | 536 | 0.6193 | 0.0 | 0.6913 |
| 0.6093 | 9.0 | 603 | 0.6290 | 0.0 | 0.6913 |
| 0.609 | 10.0 | 670 | 0.6255 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
radyad/valrad_qa_model
|
radyad
| 2023-06-15T21:10:47Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:mlqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T20:46:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlqa
model-index:
- name: valrad_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# valrad_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mlqa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8117
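A short usage sketch with the `transformers` question-answering pipeline (the question and context are made-up examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="radyad/valrad_qa_model")
result = qa(
    question="Which base model was fine-tuned?",
    context="valrad_qa_model is a DistilBERT checkpoint fine-tuned on the MLQA dataset.",
)
print(result["answer"], round(result["score"], 3))
```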
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 355 | 2.3752 |
| 3.1802 | 2.0 | 710 | 1.8748 |
| 1.6816 | 3.0 | 1065 | 1.8117 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
debajyotidasgupta/ppo-Huggy
|
debajyotidasgupta
| 2023-06-15T20:50:10Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T20:50:04Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: debajyotidasgupta/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DarkAirforce/ppo-LunarLander-v2
|
DarkAirforce
| 2023-06-15T20:48:24Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T19:39:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.10 +/- 11.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- adjust it to the actual .zip in the repo
checkpoint = load_from_hub("DarkAirforce/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rishabh063/civittry
|
rishabh063
| 2023-06-15T20:38:01Z | 0 | 0 | null |
[
"text-to-image",
"region:us"
] |
text-to-image
| 2023-06-15T20:03:36Z |
---
pipeline_tag: text-to-image
---
|
hangeol/32
|
hangeol
| 2023-06-15T20:32:53Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T19:44:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/32
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
Olegiy/ppo-Huggy
|
Olegiy
| 2023-06-15T20:07:58Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T20:07:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Olegiy/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Teunis89/ppo-Huggy
|
Teunis89
| 2023-06-15T20:06:07Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T20:06:03Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Teunis89/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
davidmunechika/coreml-openjourney-v4
|
davidmunechika
| 2023-06-15T20:05:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T16:38:58Z |
---
license: creativeml-openrail-m
---
|
emresvd/u197
|
emresvd
| 2023-06-15T20:00:59Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-06-15T20:00:53Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
davidmunechika/coreml-stable-diffusion-2-base
|
davidmunechika
| 2023-06-15T20:00:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-14T21:23:17Z |
---
license: creativeml-openrail-m
---
|
gfalcao/ldsc2-0t7
|
gfalcao
| 2023-06-15T19:42:12Z | 37 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T19:30:34Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ldsc2.0T7 Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
gokuls/hBERTv2_new_pretrain_48_emb_com_stsb
|
gokuls
| 2023-06-15T19:15:45Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T18:55:14Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_pretrain_48_emb_com_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.30729552140330846
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_emb_com_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0889
- Pearson: 0.3123
- Spearmanr: 0.3073
- Combined Score: 0.3098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.398 | 1.0 | 45 | 3.0621 | 0.0972 | 0.1007 | 0.0990 |
| 2.0392 | 2.0 | 90 | 2.3674 | 0.1058 | 0.1011 | 0.1034 |
| 1.967 | 3.0 | 135 | 2.2296 | 0.1449 | 0.1432 | 0.1441 |
| 1.8176 | 4.0 | 180 | 2.6036 | 0.2055 | 0.2169 | 0.2112 |
| 1.6744 | 5.0 | 225 | 2.2119 | 0.2516 | 0.2534 | 0.2525 |
| 1.4727 | 6.0 | 270 | 2.0889 | 0.3123 | 0.3073 | 0.3098 |
| 1.1852 | 7.0 | 315 | 2.6372 | 0.3609 | 0.3543 | 0.3576 |
| 0.9895 | 8.0 | 360 | 2.5881 | 0.3312 | 0.3322 | 0.3317 |
| 0.8254 | 9.0 | 405 | 2.1746 | 0.3991 | 0.3974 | 0.3983 |
| 0.6759 | 10.0 | 450 | 2.7671 | 0.3693 | 0.3663 | 0.3678 |
| 0.558 | 11.0 | 495 | 2.5954 | 0.3967 | 0.3942 | 0.3955 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
webstels/nekta_ai_v2
|
webstels
| 2023-06-15T19:08:25Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T18:59:10Z |
---
tags:
- generated_from_trainer
model-index:
- name: nekta_ai_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nekta_ai_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3819
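A hedged usage sketch, assuming the checkpoint works with the standard `transformers` text-generation pipeline (the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="webstels/nekta_ai_v2")
print(generator("Hello,", max_new_tokens=40)[0]["generated_text"])
```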
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 2.7725 |
| No log | 2.0 | 2 | 2.0761 |
| No log | 3.0 | 3 | 1.6521 |
| No log | 4.0 | 4 | 1.3916 |
| No log | 5.0 | 5 | 1.2775 |
| No log | 6.0 | 6 | 1.1982 |
| No log | 7.0 | 7 | 1.1158 |
| No log | 8.0 | 8 | 1.0482 |
| No log | 9.0 | 9 | 1.0100 |
| No log | 10.0 | 10 | 0.9798 |
| No log | 11.0 | 11 | 0.9576 |
| No log | 12.0 | 12 | 0.9373 |
| No log | 13.0 | 13 | 0.9129 |
| No log | 14.0 | 14 | 0.8900 |
| No log | 15.0 | 15 | 0.8704 |
| No log | 16.0 | 16 | 0.8561 |
| No log | 17.0 | 17 | 0.8436 |
| No log | 18.0 | 18 | 0.8343 |
| No log | 19.0 | 19 | 0.8197 |
| No log | 20.0 | 20 | 0.7882 |
| No log | 21.0 | 21 | 0.7567 |
| No log | 22.0 | 22 | 0.7370 |
| No log | 23.0 | 23 | 0.7239 |
| No log | 24.0 | 24 | 0.7099 |
| No log | 25.0 | 25 | 0.6934 |
| No log | 26.0 | 26 | 0.6758 |
| No log | 27.0 | 27 | 0.6582 |
| No log | 28.0 | 28 | 0.6439 |
| No log | 29.0 | 29 | 0.6290 |
| No log | 30.0 | 30 | 0.6120 |
| No log | 31.0 | 31 | 0.5951 |
| No log | 32.0 | 32 | 0.5779 |
| No log | 33.0 | 33 | 0.5587 |
| No log | 34.0 | 34 | 0.5395 |
| No log | 35.0 | 35 | 0.5184 |
| No log | 36.0 | 36 | 0.4945 |
| No log | 37.0 | 37 | 0.4751 |
| No log | 38.0 | 38 | 0.4597 |
| No log | 39.0 | 39 | 0.4462 |
| No log | 40.0 | 40 | 0.4361 |
| No log | 41.0 | 41 | 0.4264 |
| No log | 42.0 | 42 | 0.4184 |
| No log | 43.0 | 43 | 0.4111 |
| No log | 44.0 | 44 | 0.4064 |
| No log | 45.0 | 45 | 0.4006 |
| No log | 46.0 | 46 | 0.3948 |
| No log | 47.0 | 47 | 0.3892 |
| No log | 48.0 | 48 | 0.3857 |
| No log | 49.0 | 49 | 0.3833 |
| No log | 50.0 | 50 | 0.3819 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Tokenizers 0.13.3
|
DesiAEye/Madhubala
|
DesiAEye
| 2023-06-15T19:07:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T19:03:16Z |
---
license: creativeml-openrail-m
---
Support on Patreon: https://www.patreon.com/DesiAEye
Join Discord: https://discord.gg/TGWvDGVt
Introducing Madhubala, a remarkable LoRA model trained on the face of the iconic Indian actress Madhubala. This model is designed to generate photorealistic and semi-realistic images of the legendary celebrity. With the trigger word "Madhubala woman", witness the artistry of this AI-powered creation.
Celebrate the beauty and charisma of Madhubala, the epitome of Indian cinema, through the intricate details and lifelike expressions captured by this exceptional model. Whether you're a fan of classic Indian cinema or appreciate the elegance of a talented actress, Madhubala will captivate your imagination.
Embrace the essence of this talented Indian woman and indulge in the artistry of Madhubala. Explore the magic of photorealism and unlock a world of creativity and inspiration with this extraordinary LoRA model.
|
asapp/sew-d-tiny-100k-ft-ls100h
|
asapp
| 2023-06-15T19:07:05Z | 98,517 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"sew-d",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- speech
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: sew-d-tiny-100k-ft-ls100h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 10.47
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 22.73
---
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **asapp/sew-d-tiny-100k-ft-ls100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
def map_to_pred(batch):
input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
| --- | --- |
| 10.47 | 22.73 |
|
hangeol/3
|
hangeol
| 2023-06-15T18:58:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T19:07:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/3
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
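A minimal inference sketch with 🤗 Diffusers is shown below. The placeholder token in the prompt is only illustrative, since the actual token learned during training is not documented in this card; `load_textual_inversion` reads it from the embedding file in this repository.
```python
from diffusers import StableDiffusionPipeline
import torch

# load the base model these embeddings were trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load the textual inversion embeddings from this repository
pipe.load_textual_inversion("hangeol/3")

# "<concept>" is a stand-in for the learned placeholder token
image = pipe("a photo of <concept> on a beach", num_inference_steps=30).images[0]
image.save("example.png")
```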
|
lrthomps/poca-SoccerTwos
|
lrthomps
| 2023-06-15T18:54:20Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-15T18:53:59Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lrthomps/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
gokuls/hBERTv2_new_pretrain_48_emb_com_qqp
|
gokuls
| 2023-06-15T18:47:43Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T21:39:47Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_emb_com_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.793668068266139
- name: F1
type: f1
value: 0.7323021628907003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_emb_com_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4578
- Accuracy: 0.7937
- F1: 0.7323
- Combined Score: 0.7630
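Since QQP is a sentence-pair (paraphrase) task, inference pairs two questions. The sketch below is a hedged example using the generic Auto classes; it assumes the custom `hybridbert` architecture resolves through them, which may require the author's model code.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/hBERTv2_new_pretrain_48_emb_com_qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QQP: predict whether the two questions are duplicates of each other
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```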
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5521 | 1.0 | 2843 | 0.5375 | 0.6937 | 0.6680 | 0.6808 |
| 0.5006 | 2.0 | 5686 | 0.4939 | 0.7551 | 0.6743 | 0.7147 |
| 0.469 | 3.0 | 8529 | 0.4866 | 0.7714 | 0.6698 | 0.7206 |
| 0.4561 | 4.0 | 11372 | 0.5096 | 0.7633 | 0.6588 | 0.7110 |
| 0.44 | 5.0 | 14215 | 0.4706 | 0.7813 | 0.6987 | 0.7400 |
| 0.4168 | 6.0 | 17058 | 0.4598 | 0.7867 | 0.7061 | 0.7464 |
| 0.4 | 7.0 | 19901 | 0.4769 | 0.7797 | 0.7154 | 0.7476 |
| 0.3843 | 8.0 | 22744 | 0.4907 | 0.7829 | 0.7122 | 0.7476 |
| 0.3686 | 9.0 | 25587 | 0.4590 | 0.7844 | 0.7303 | 0.7574 |
| 0.3457 | 10.0 | 28430 | 0.4578 | 0.7937 | 0.7323 | 0.7630 |
| 0.3278 | 11.0 | 31273 | 0.4756 | 0.8034 | 0.7251 | 0.7643 |
| 0.3124 | 12.0 | 34116 | 0.4793 | 0.8026 | 0.7349 | 0.7688 |
| 0.2975 | 13.0 | 36959 | 0.4680 | 0.8009 | 0.7392 | 0.7701 |
| 0.2851 | 14.0 | 39802 | 0.4649 | 0.8061 | 0.7328 | 0.7695 |
| 0.273 | 15.0 | 42645 | 0.4699 | 0.7990 | 0.7379 | 0.7685 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bsmock/tatr-pubtables1m-v1.0
|
bsmock
| 2023-06-15T18:44:41Z | 0 | 12 | null |
[
"table detection",
"table structure recognition",
"table extraction",
"dataset:bsmock/pubtables-1m",
"license:mit",
"region:us"
] | null | 2023-06-02T16:09:54Z |
---
license: mit
datasets:
- bsmock/pubtables-1m
tags:
- table detection
- table structure recognition
- table extraction
---
# Model Card for tatr-pubtables1m-v1.0
This repo contains the models for:
1) Table detection,
2) Table structure recognition,
trained on the PubTables-1M dataset, using the training details in the paper: ["PubTables-1M: Towards comprehensive table extraction from unstructured documents"](https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html)
## Model Details
### Model Description
- **Developed by:** Brandon Smock and Rohith Pesala, while at Microsoft
- **License:** MIT
- **Finetuned from model:** DETR ResNet-18
### Model Sources
Please see the following for more details:
- **Repository:** ["https://github.com/microsoft/table-transformer"](https://github.com/microsoft/table-transformer)
- **Paper:** ["PubTables-1M: Towards comprehensive table extraction from unstructured documents"](https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html)
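The checkpoints in this repository follow the original training code's format. If you prefer the 🤗 Transformers port of the same PubTables-1M models, a hedged detection sketch (using the `microsoft/table-transformer-detection` port rather than the files in this repo) looks like this:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# Transformers port of the PubTables-1M detection checkpoint
processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

image = Image.open("page.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a confidence threshold
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=[image.size[::-1]]
)[0]
for label, box in zip(results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], [round(v, 1) for v in box.tolist()])
```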
|
hangeol/5
|
hangeol
| 2023-06-15T18:31:13Z | 8 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-14T19:09:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
gokuls/add_bert_12_layer_model_complete_training_new_96
|
gokuls
| 2023-06-15T18:23:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-13T17:57:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: add_bert_12_layer_model_complete_training_new_96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_bert_12_layer_model_complete_training_new_96
This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new_48](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new_48) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4112
- Accuracy: 0.1893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.8144 | 0.08 | 10000 | 5.7474 | 0.1593 |
| 5.7889 | 0.16 | 20000 | 5.7204 | 0.1604 |
| 5.6347 | 0.25 | 30000 | 5.6966 | 0.1623 |
| 5.7138 | 0.33 | 40000 | 5.6725 | 0.1636 |
| 5.6769 | 0.41 | 50000 | 5.6518 | 0.1658 |
| 5.6603 | 0.49 | 60000 | 5.6290 | 0.1686 |
| 5.5852 | 0.57 | 70000 | 5.6076 | 0.1707 |
| 5.6607 | 0.66 | 80000 | 5.5906 | 0.1720 |
| 5.5823 | 0.74 | 90000 | 5.5719 | 0.1739 |
| 5.6124 | 0.82 | 100000 | 5.5543 | 0.1759 |
| 5.6478 | 0.9 | 110000 | 5.5358 | 0.1776 |
| 5.4795 | 0.98 | 120000 | 5.5203 | 0.1787 |
| 5.4557 | 1.07 | 130000 | 5.5028 | 0.1804 |
| 5.5585 | 1.15 | 140000 | 5.4923 | 0.1814 |
| 5.6387 | 1.23 | 150000 | 5.4781 | 0.1825 |
| 5.479 | 1.31 | 160000 | 5.4663 | 0.1833 |
| 5.3951 | 1.39 | 170000 | 5.4512 | 0.1851 |
| 5.5062 | 1.47 | 180000 | 5.4411 | 0.1864 |
| 5.4553 | 1.56 | 190000 | 5.4244 | 0.1881 |
| 5.5461 | 1.64 | 200000 | 5.4112 | 0.1893 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PraveenJesu/whisper-tiny-zoomrx-v1
|
PraveenJesu
| 2023-06-15T18:21:39Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-15T18:17:51Z |
This directory includes a few sample datasets to get you started.
* `california_housing_data*.csv` is California housing data from the 1990 US
Census; more information is available at:
https://developers.google.com/machine-learning/crash-course/california-housing-data-description
* `mnist_*.csv` is a small sample of the
[MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is
described at: http://yann.lecun.com/exdb/mnist/
* `anscombe.json` contains a copy of
[Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it
was originally described in
Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American
Statistician. 27 (1): 17-21. JSTOR 2682899.
and our copy was prepared by the
[vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).
|
GyanShashwat/distilbert-base-uncased-finetuned-test-data-v2
|
GyanShashwat
| 2023-06-15T18:12:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T18:08:45Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: GyanShashwat/distilbert-base-uncased-finetuned-test-data-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GyanShashwat/distilbert-base-uncased-finetuned-test-data-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.8903
- Train End Logits Accuracy: 0.0
- Train Start Logits Accuracy: 0.1429
- Epoch: 81
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.01, 'decay_steps': 100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 5.9801 | 0.0 | 0.0 | 0 |
| 5.9338 | 0.0 | 0.0 | 1 |
| 5.9744 | 0.0 | 0.0 | 2 |
| 5.9011 | 0.0 | 0.0 | 3 |
| 5.9892 | 0.0 | 0.0 | 4 |
| 6.0409 | 0.0 | 0.0 | 5 |
| 5.8902 | 0.0 | 0.0 | 6 |
| 5.9480 | 0.0 | 0.0 | 7 |
| 6.0100 | 0.0 | 0.0 | 8 |
| 6.0898 | 0.0 | 0.0 | 9 |
| 5.9093 | 0.0 | 0.0 | 10 |
| 5.8435 | 0.0 | 0.0 | 11 |
| 5.9528 | 0.0 | 0.0 | 12 |
| 5.9702 | 0.0 | 0.0 | 13 |
| 6.2079 | 0.0 | 0.0 | 14 |
| 6.0707 | 0.0 | 0.0 | 15 |
| 6.0218 | 0.0 | 0.0 | 16 |
| 5.9175 | 0.0 | 0.0 | 17 |
| 5.8957 | 0.0 | 0.0 | 18 |
| 5.9021 | 0.0 | 0.0 | 19 |
| 6.1419 | 0.0 | 0.0 | 20 |
| 6.0310 | 0.0 | 0.0 | 21 |
| 5.8559 | 0.0 | 0.0 | 22 |
| 5.9768 | 0.0 | 0.0 | 23 |
| 6.0752 | 0.0 | 0.0 | 24 |
| 6.3935 | 0.0 | 0.0 | 25 |
| 6.2257 | 0.0 | 0.0 | 26 |
| 6.2152 | 0.0 | 0.0 | 27 |
| 6.1603 | 0.0 | 0.0 | 28 |
| 6.2708 | 0.0 | 0.0 | 29 |
| 5.9893 | 0.0 | 0.0 | 30 |
| 5.6298 | 0.0 | 0.2857 | 31 |
| 5.9713 | 0.0 | 0.0 | 32 |
| 6.1259 | 0.0 | 0.0 | 33 |
| 6.0831 | 0.0 | 0.0 | 34 |
| 6.1936 | 0.0 | 0.0 | 35 |
| 6.1549 | 0.0 | 0.0 | 36 |
| 6.1610 | 0.0 | 0.0 | 37 |
| 6.1028 | 0.0 | 0.0 | 38 |
| 6.3336 | 0.0 | 0.0 | 39 |
| 6.1848 | 0.0 | 0.0 | 40 |
| 6.1255 | 0.0 | 0.0 | 41 |
| 6.0896 | 0.0 | 0.0 | 42 |
| 6.2798 | 0.0 | 0.0 | 43 |
| 6.2555 | 0.0 | 0.0 | 44 |
| 6.3498 | 0.0 | 0.0 | 45 |
| 6.1329 | 0.0 | 0.0 | 46 |
| 6.1033 | 0.0 | 0.0 | 47 |
| 6.1298 | 0.1429 | 0.0 | 48 |
| 6.1285 | 0.0 | 0.0 | 49 |
| 6.3465 | 0.0 | 0.0 | 50 |
| 6.1177 | 0.0 | 0.0 | 51 |
| 6.1626 | 0.0 | 0.0 | 52 |
| 6.0304 | 0.0 | 0.0 | 53 |
| 6.0605 | 0.1429 | 0.0 | 54 |
| 5.9403 | 0.0 | 0.0 | 55 |
| 5.7870 | 0.0 | 0.0 | 56 |
| 6.1490 | 0.0 | 0.0 | 57 |
| 5.9711 | 0.0 | 0.1429 | 58 |
| 6.0982 | 0.0 | 0.0 | 59 |
| 5.7100 | 0.1429 | 0.0 | 60 |
| 5.9671 | 0.0 | 0.0 | 61 |
| 6.0133 | 0.0 | 0.0 | 62 |
| 6.1473 | 0.0 | 0.0 | 63 |
| 5.8185 | 0.0 | 0.0 | 64 |
| 5.8461 | 0.0 | 0.0 | 65 |
| 5.8286 | 0.1429 | 0.0 | 66 |
| 6.1176 | 0.0 | 0.0 | 67 |
| 6.0289 | 0.0 | 0.0 | 68 |
| 6.0143 | 0.0 | 0.0 | 69 |
| 6.1875 | 0.0 | 0.0 | 70 |
| 6.1716 | 0.0 | 0.0 | 71 |
| 5.8779 | 0.0 | 0.0 | 72 |
| 6.1317 | 0.0 | 0.0 | 73 |
| 6.0170 | 0.0 | 0.0 | 74 |
| 6.0243 | 0.0 | 0.0 | 75 |
| 5.9871 | 0.0 | 0.0 | 76 |
| 6.0451 | 0.0 | 0.0 | 77 |
| 6.0820 | 0.0 | 0.0 | 78 |
| 6.1378 | 0.0 | 0.0 | 79 |
| 6.0649 | 0.0 | 0.0 | 80 |
| 5.8903 | 0.0 | 0.1429 | 81 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
law-ai/CustomInLawBERT
|
law-ai
| 2023-06-15T18:03:18Z | 119 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"legal",
"en",
"arxiv:2209.06049",
"arxiv:2112.14731",
"arxiv:1911.05405",
"arxiv:2105.13562",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-05T06:53:03Z |
---
language: en
pipeline_tag: fill-mask
tags:
- legal
license: mit
---
### InLegalBERT
Model and tokenizer files for the InLegalBERT model from the paper [Pre-training Transformers on Indian Legal Text](https://arxiv.org/abs/2209.06049).
### Training Data
For building the pre-training corpus of Indian legal text, we collected a large corpus of case documents from the Indian Supreme Court and many High Courts of India.
The court cases in our dataset range from 1950 to 2019, and belong to all legal domains, such as Civil, Criminal, Constitutional, and so on.
In total, our dataset contains around 5.4 million Indian legal documents (all in the English language).
The raw text corpus size is around 27 GB.
### Training Setup
This model is initialized with the [LEGAL-BERT-SC model](https://huggingface.co/nlpaueb/legal-bert-base-uncased) from the paper [LEGAL-BERT: The Muppets straight out of Law School](https://aclanthology.org/2020.findings-emnlp.261/). In our work, we refer to this model as LegalBERT, and our re-trained model as InLegalBERT.
We further train this model on our data for 300K steps on the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks.
### Model Overview
This model uses a custom tokenizer with vocabulary adapted for the Indian Legal domain.
This model has the same configuration as the [bert-base-uncased model](https://huggingface.co/bert-base-uncased):
12 hidden layers, 768 hidden dimensionality, 12 attention heads, ~110M parameters.
### Usage
Using the model to get embeddings/representations for a piece of text
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("law-ai/CustomInLawBERT")
text = "Replace this string with yours"
encoded_input = tokenizer(text, return_tensors="pt")
model = AutoModel.from_pretrained("law-ai/CustomInLawBERT")
output = model(**encoded_input)
last_hidden_state = output.last_hidden_state
```
### Fine-tuning Results
We have fine-tuned all pre-trained models on 3 legal tasks with Indian datasets:
* Legal Statute Identification ([ILSI Dataset](https://arxiv.org/abs/2112.14731)) [Multi-label Text Classification]: Identifying relevant statutes (law articles) based on the facts of a court case
* Semantic Segmentation ([ISS Dataset](https://arxiv.org/abs/1911.05405)) [Sentence Tagging]: Segmenting the document into 7 functional parts (semantic segments) such as Facts, Arguments, etc.
* Court Judgment Prediction ([ILDC Dataset](https://arxiv.org/abs/2105.13562)) [Binary Text Classification]: Predicting whether the claims/petitions of a court case will be accepted/rejected
### Citation
```
@inproceedings{paul-2022-pretraining,
url = {https://arxiv.org/abs/2209.06049},
author = {Paul, Shounak and Mandal, Arpan and Goyal, Pawan and Ghosh, Saptarshi},
title = {Pre-trained Language Models for the Legal Domain: A Case Study on Indian Law},
booktitle = {Proceedings of 19th International Conference on Artificial Intelligence and Law - ICAIL 2023}
year = {2023},
}
```
### About Us
We are a group of researchers from the Department of Computer Science and Technology, Indian Institute of Technology, Kharagpur.
Our research interests are primarily ML and NLP applications for the legal domain, with a special focus on the challenges and opportunities of the Indian legal scenario.
We have worked on, and are currently working on, several legal tasks such as:
* named entity recognition, summarization of legal documents
* semantic segmentation of legal documents
* legal statute identification from facts, court judgment prediction
* legal document matching
You can find our publicly available codes and datasets [here](https://github.com/Law-AI).
|
MichelNivard/hexcoder
|
MichelNivard
| 2023-06-15T17:58:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"custom_code",
"dataset:bigcode/the-stack",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T08:10:59Z |
---
datasets:
- bigcode/the-stack
---
# hexcoder

This model further trains the base [santacoder model](https://huggingface.co/bigcode/santacoder) on all R and R Markdown code in "the Stack", for 6 epochs on 512-token snippets of R and R Markdown code. While there isn't that much R code in the Stack (far less than Python or Java), this should at least give the model some R skills.
Because I am on a limited compute budget, I trained the model on 512-token pieces of R code, which means it will do poorly on longer pieces of code. I will now proceed to fine-tune the base model on 2048-context-length pieces of R code in a parameter-efficient way, for another 2 epochs (to ensure acceptable performance beyond 512 tokens).
Then I intend to instruction-tune the model on all Stack Overflow questions and answers tagged 'r' from the 2011 to 2016 timeframe, presenting Stack Overflow questions as <|human|> and the best answer as <|assistant|>. This will teach the model that it is expected to produce an answer to a user's question about R.
The intended outcome is a reasonably adequate model that can answer basic R user questions, and more broadly an evaluation of the data, sources, and training needed to produce great open-source code-generating models for R.
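A minimal generation sketch is shown below; it assumes the checkpoint loads the same way as its santacoder base, which ships custom modelling code and therefore needs `trust_remote_code=True`.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# santacoder-based checkpoints ship custom modelling code
tokenizer = AutoTokenizer.from_pretrained("MichelNivard/hexcoder", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("MichelNivard/hexcoder", trust_remote_code=True)

prompt = "# Fit a linear regression of mpg on wt using the mtcars data\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```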
|
sofia-todeschini/BioLinkBERT-LitCovid-v1.0
|
sofia-todeschini
| 2023-06-15T17:44:27Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-31T18:48:52Z |
---
license: mit
---
# BioLinkBERT-LitCovid-v1.0
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1098
- F1: 0.8992
- Roc Auc: 0.9330
- Accuracy: 0.7945
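The reported metrics (F1 and ROC AUC alongside a lower exact-match accuracy) suggest a multi-label topic classifier for LitCovid abstracts. A hedged inference sketch with sigmoid thresholding is shown below; the label names come from the model config and are not documented in this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sofia-todeschini/BioLinkBERT-LitCovid-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "We report the efficacy of a vaccine candidate against SARS-CoV-2 in a phase 3 trial."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# keep every label whose probability clears 0.5 (multi-label setting)
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```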
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1172 | 1.0 | 3120 | 0.1098 | 0.8992 | 0.9330 | 0.7945 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GyanShashwat/distilbert-base-uncased-finetuned-test-data
|
GyanShashwat
| 2023-06-15T17:39:11Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-15T15:20:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: GyanShashwat/distilbert-base-uncased-finetuned-test-data
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GyanShashwat/distilbert-base-uncased-finetuned-test-data
This model is a fine-tuned version of [GyanShashwat/distilbert-base-uncased-finetuned-test-data](https://huggingface.co/GyanShashwat/distilbert-base-uncased-finetuned-test-data) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.0539
- Train End Logits Accuracy: 0.0
- Train Start Logits Accuracy: 0.0
- Epoch: 75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.01, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:-----:|
| 6.5953 | 0.0 | 0.0 | 0 |
| 6.0959 | 0.0 | 0.0 | 1 |
| 6.0750 | 0.0 | 0.1429 | 2 |
| 6.2449 | 0.0 | 0.0 | 3 |
| 6.6021 | 0.0 | 0.0 | 4 |
| 6.4264 | 0.0 | 0.0 | 5 |
| 6.6183 | 0.0 | 0.0 | 6 |
| 6.4572 | 0.0 | 0.0 | 7 |
| 6.2062 | 0.0 | 0.0 | 8 |
| 6.3750 | 0.0 | 0.0 | 9 |
| 6.4880 | 0.0 | 0.0 | 10 |
| 6.6889 | 0.0 | 0.0 | 11 |
| 6.0914 | 0.0 | 0.0 | 12 |
| 6.0446 | 0.0 | 0.0 | 13 |
| 6.8131 | 0.0 | 0.0 | 14 |
| 6.9439 | 0.0 | 0.0 | 15 |
| 6.0789 | 0.0 | 0.0 | 16 |
| 6.3060 | 0.0 | 0.0 | 17 |
| 6.1862 | 0.0 | 0.0 | 18 |
| 6.4202 | 0.0 | 0.0 | 19 |
| 6.0899 | 0.0 | 0.0 | 20 |
| 6.4460 | 0.0 | 0.0 | 21 |
| 6.0554 | 0.0 | 0.0 | 22 |
| 6.1655 | 0.0 | 0.0 | 23 |
| 6.3298 | 0.0 | 0.0 | 24 |
| 6.1062 | 0.0 | 0.0 | 25 |
| 6.2737 | 0.0 | 0.0 | 26 |
| 6.1412 | 0.0 | 0.0 | 27 |
| 6.2286 | 0.0 | 0.0 | 28 |
| 6.2041 | 0.0 | 0.0 | 29 |
| 6.7055 | 0.0 | 0.0 | 30 |
| 6.2596 | 0.0 | 0.0 | 31 |
| 6.7166 | 0.0 | 0.0 | 32 |
| 6.1891 | 0.0 | 0.0 | 33 |
| 6.1920 | 0.0 | 0.0 | 34 |
| 6.2608 | 0.0 | 0.0 | 35 |
| 6.0968 | 0.0 | 0.0 | 36 |
| 6.6072 | 0.0 | 0.0 | 37 |
| 6.2966 | 0.0 | 0.0 | 38 |
| 6.4528 | 0.0 | 0.0 | 39 |
| 6.5660 | 0.0 | 0.0 | 40 |
| 6.3345 | 0.0 | 0.0 | 41 |
| 6.1812 | 0.0 | 0.0 | 42 |
| 6.1986 | 0.0 | 0.0 | 43 |
| 6.2477 | 0.0 | 0.0 | 44 |
| 6.2783 | 0.0 | 0.0 | 45 |
| 6.7758 | 0.0 | 0.0 | 46 |
| 6.0984 | 0.0 | 0.0 | 47 |
| 6.1547 | 0.0 | 0.0 | 48 |
| 6.1153 | 0.0 | 0.0 | 49 |
| 6.2574 | 0.0 | 0.0 | 50 |
| 5.9857 | 0.0 | 0.0 | 51 |
| 6.1978 | 0.0 | 0.0 | 52 |
| 6.4674 | 0.0 | 0.0 | 53 |
| 6.0991 | 0.0 | 0.0 | 54 |
| 6.2534 | 0.0 | 0.0 | 55 |
| 6.1088 | 0.0 | 0.0 | 56 |
| 5.8161 | 0.0 | 0.0 | 57 |
| 5.9146 | 0.0 | 0.0 | 58 |
| 6.2400 | 0.0 | 0.0 | 59 |
| 6.2602 | 0.1429 | 0.0 | 60 |
| 6.0889 | 0.0 | 0.0 | 61 |
| 6.2283 | 0.0 | 0.0 | 62 |
| 6.4321 | 0.0 | 0.0 | 63 |
| 6.6588 | 0.0 | 0.0 | 64 |
| 6.2557 | 0.0 | 0.0 | 65 |
| 6.2958 | 0.0 | 0.0 | 66 |
| 6.1113 | 0.0 | 0.0 | 67 |
| 6.3594 | 0.0 | 0.0 | 68 |
| 5.9983 | 0.0 | 0.0 | 69 |
| 6.0230 | 0.0 | 0.1429 | 70 |
| 6.1085 | 0.0 | 0.0 | 71 |
| 6.3313 | 0.0 | 0.0 | 72 |
| 6.4739 | 0.0 | 0.0 | 73 |
| 6.1131 | 0.0 | 0.0 | 74 |
| 6.0539 | 0.0 | 0.0 | 75 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sofia-todeschini/BioLinkBERT-LitCovid-v1.1
|
sofia-todeschini
| 2023-06-15T17:29:29Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T15:14:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: BioLinkBERT-LitCovid-v1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioLinkBERT-LitCovid-v1.1
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1070
- F1: 0.9009
- Roc Auc: 0.9439
- Accuracy: 0.7915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.119 | 1.0 | 1560 | 0.1121 | 0.8949 | 0.9366 | 0.7857 |
| 0.0994 | 2.0 | 3120 | 0.1050 | 0.8999 | 0.9335 | 0.7934 |
| 0.0745 | 3.0 | 4680 | 0.1070 | 0.9009 | 0.9439 | 0.7915 |
| 0.0584 | 4.0 | 6240 | 0.1132 | 0.8986 | 0.9367 | 0.7900 |
| 0.0445 | 5.0 | 7800 | 0.1183 | 0.8993 | 0.9385 | 0.7886 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
terasys/angelchan
|
terasys
| 2023-06-15T17:28:57Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T13:53:53Z |
---
license: creativeml-openrail-m
---
|
Foxasdf/EnglishSpeechToText
|
Foxasdf
| 2023-06-15T17:23:55Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-02-21T18:39:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: EnglishSpeechToText
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EnglishSpeechToText
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
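A short, hedged inference sketch (assuming the repository includes the processor/tokenizer files needed by the ASR pipeline):
```python
from transformers import pipeline

# wav2vec2 CTC checkpoints expect 16 kHz mono audio; the pipeline resamples audio files for you
asr = pipeline("automatic-speech-recognition", model="Foxasdf/EnglishSpeechToText")
print(asr("sample.wav")["text"])
```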
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 41
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nisaar/falcon7b-Indian_Lawyer
|
nisaar
| 2023-06-15T17:19:35Z | 0 | 2 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T16:43:40Z |
---
language:
- en
Tags:
- fine-tuned
- legal
- Indian law
license: "apache-2.0"
metrics:
- perplexity
---
# Fine-Tuned Falcon 7B - Indian Law
This is a Falcon 7B model fine-tuned for question answering in the domain of Indian law. It has been trained to answer questions regarding various aspects of the Indian legal system, such as the Constitution, the roles of governmental positions, and more.
## Model Description
Falcon is a family of state-of-the-art language models created by the Technology Innovation Institute in Abu Dhabi. This version, Falcon 7B, has been fine-tuned to specialize in understanding and generating responses related to Indian law. The model was trained on a custom dataset composed of question-answer pairs about Indian law.
## How to use
You can use this model for generating responses. Here is how to do it:
```python
from transformers import pipeline

# Falcon-based checkpoints may need trust_remote_code=True, depending on your transformers version
generator = pipeline('text-generation', model='nisaar/falcon7b-Indian_Lawyer', trust_remote_code=True)
print(generator("<human>: What is the role of the Judiciary as per the Constitution of India?", max_length=100))
```
|
emresvd/u196
|
emresvd
| 2023-06-15T17:18:49Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-06-15T17:18:44Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
zihansyu/donut-base-sroie
|
zihansyu
| 2023-06-15T17:14:40Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-06-15T09:49:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
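A hedged inference sketch for a Donut fine-tune is shown below. The task start token (`<s_sroie>`) is an assumption, since the prompt token added during fine-tuning is not documented in this card; check the tokenizer's added tokens for the real one.
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "zihansyu/donut-base-sroie"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# the task start token below is an assumption -- check the tokenizer's added tokens
task_prompt = "<s_sroie>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.token2json(processor.batch_decode(outputs)[0]))
```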
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
eason0203/ppo-LunarLander-v2
|
eason0203
| 2023-06-15T17:14:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T17:13:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.25 +/- 16.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
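Until then, here is a minimal, hedged loading/evaluation sketch with `huggingface_sb3`; the checkpoint filename inside the repository is an assumption, so check the repo's file list.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# download the checkpoint from the Hub (filename assumed -- check the repository files)
checkpoint = load_from_hub("eason0203/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```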
|
jiayanli/my-awesome-setfit-model
|
jiayanli
| 2023-06-15T17:02:14Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-15T17:01:23Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jiayanli/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jiayanli/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
ajtamayoh/RE_NegREF_NSD_Nubes_Training_Test_dataset_roberta-base-biomedical-clinical-es_fine_tuned_v3
|
ajtamayoh
| 2023-06-15T16:09:37Z | 272 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-21T21:57:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: RE_NegREF_NSD_Nubes_Training_Test_dataset_roberta-base-biomedical-clinical-es_fine_tuned_v3
results: []
widget:
- text: "Desde entoces refiere que ha tomado varias tandas de antibioticos sin terminar de ceder la clínica (aún presenta cierta disuria y polaquiuria)."
- text: "Dato reseñable: en los primeros días a nuestro cuidado presentó de nuevo anemización (se adjuntan controles analíticos) que ha ido recuperando sin que haya precisado hemotransfusión (al alta de Had presenta Hgb de 10.2g/dl)."
- text: "Paciente con diagnóstico de ELA en abril de 2015 que presenta desde hace más de dos meses disfagia progresiva, para líquidos preferentemente, con dos neumonías por aspiración, por lo que se programa ingreso para colocación de sonda de gastrostomía, realizándose el día 31 de diciembre, sin complicaciones y tolerando posteriormente la dieta por gastrostomía."
- text: "El ecocardiograma doppler color no muestra patologia que justifique los síntomas y la paciente evoluciona completamente asintomática y estable."
- text: "Ultima deposición, esa mañana, de escasa cuantia, sin emisión de gases."
- text: "Al alta presentaba hemipresia derecha con actividad en hombro hasta 90º de AF, mano espástica en flexión sin actividad."
- text: "Bases pulmonares, cavidades pleurales, hígado, bazo, páncreas y suprarrenales sin hallazgos."
- text: "Paciente con los antecedentes reseñados que ingresa por cuadro de escasas horas de evolución consistente en exacerbacion de su temblor habitual, que parece haberse hecho generalizado y cuya descripcion es incapaz de precisar."
- text: "Resumen de Historia clinica: Paciente ingresado por incremento de la disnea a los esfuerzos, en la urgencia se detecta ACxFA antes no conocida."
- text: "Mujer con 5 meses de gestación que ingresa por cuadro grave debido a bloqueo AV completo, que parece haber ocurrido en la madrugada y cuya sintomatología es incapaz de precisar."
- text: "La función renal ha sido correcta en todo momento con progresiva normalización de los indicadores infecciosos (HRF, PCR, VSG). Tras completar antibioterapia endovenosa con vancomicina el 25 de junio, inició el 26 de junio linezolid 600mg cada 12horas, siguiendo las recomendaciones de la unidad de Enfermedades Infecciosas, siendo bien tolerado y sin toxicidad, realizándose seguimiento analítico."
- text: "Presentamos un caso de convulsiones asociadas al tratamiento con L-asparaginasa, sin evidencia de eventos cerebrovasculares hemorrágicos o trombóticos."
- text: "Paladar asimétrico con desviación de uvula a la derecha, hiperemico, no abombado."
- text: "fecha de nacimiento 11 07 2016 niega ocntacto reicnete covid masculino de 6 anos de edad quien e straido a revision por su padre por presntar cuadro de presentacion al estar en la tarde como alas 13:00hrs en un acto religioso elmenor presenta un eventipo de mareo, un vomito gastroalimentario y afectacion visual, refiere haber ingerido alimentos 3 a 4 mnhoras previos, poero muy baja ingesta de liquidos."
- text: "se presneto recuperacion al 100%, el menor se encuentra neuroiologicamnete integro sin focalizaciones glasgow de 15/15 sin focalizaciones, cerebelo integro, retso normal, preocrdio normodinamico con ruidos ritmicos d eintensidad y frec7uencia dnetro d elo normal, no soplos o agregados."
- text: "paciente quien presnta cuadro climnico posible asociado a baj ingest ad eliquidos y someterse a evento religioso por tiempo proerfongafdo, en este momento sin afectacion cñlicncia"
- text: "El hígado muestra tamaño, morfología y valores de atenuación normales sin que se evidencien lesiones focales."
- text: "Test de las fotos: Recuerdo libre 3; Recuerdo facilitado 3; Sin recuerdo 0."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RE_NegREF_NSD_Nubes_Training_Test_dataset_roberta-base-biomedical-clinical-es_fine_tuned_v3
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on an adaptation of the NUBES dataset called NeRUBioS (for this model, uncertainty labels were not considered). The training and test datasets have 13,832 and 2,765 samples, respectively. This is a result of the PhD dissertation of Antonio Tamayo.
It achieves the following results on the evaluation set:
- Loss: 0.3617
- Negref Precision: 0.5916
- Negref Recall: 0.6021
- Negref F1: 0.5968
- Neg Precision: 0.9531
- Neg Recall: 0.9698
- Neg F1: 0.9614
- Nsco Precision: 0.8976
- Nsco Recall: 0.9145
- Nsco F1: 0.9060
- Precision: 0.8598
- Recall: 0.8754
- F1: 0.8676
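A token-classification pipeline can be used to tag negation cues, scopes, and references; the example sentence below is taken from the widget examples above, and the aggregation strategy is just a convenient default.
```python
from transformers import pipeline

model_id = "ajtamayoh/RE_NegREF_NSD_Nubes_Training_Test_dataset_roberta-base-biomedical-clinical-es_fine_tuned_v3"
nlp = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

text = "Paladar asimétrico con desviación de uvula a la derecha, hiperemico, no abombado."
for entity in nlp(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```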
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Negref Precision | Negref Recall | Negref F1 | Neg Precision | Neg Recall | Neg F1 | Nsco Precision | Nsco Recall | Nsco F1 | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|
| 0.0026 | 1.0 | 1729 | 0.3442 | 0.5689 | 0.5639 | 0.5664 | 0.9602 | 0.9663 | 0.9632 | 0.8765 | 0.9017 | 0.8889 | 0.8512 | 0.8614 | 0.8563 |
| 0.0098 | 2.0 | 3458 | 0.2580 | 0.5198 | 0.5771 | 0.5470 | 0.9254 | 0.9761 | 0.9501 | 0.8796 | 0.9123 | 0.8957 | 0.8236 | 0.8722 | 0.8472 |
| 0.0172 | 3.0 | 5187 | 0.2335 | 0.5618 | 0.6344 | 0.5959 | 0.9524 | 0.9698 | 0.9610 | 0.8908 | 0.9070 | 0.8988 | 0.8449 | 0.8789 | 0.8616 |
| 0.0082 | 4.0 | 6916 | 0.2568 | 0.5819 | 0.6520 | 0.6150 | 0.9563 | 0.9670 | 0.9616 | 0.8896 | 0.9085 | 0.8990 | 0.8505 | 0.8818 | 0.8659 |
| 0.0054 | 5.0 | 8645 | 0.3267 | 0.5882 | 0.6123 | 0.6000 | 0.9601 | 0.9628 | 0.9614 | 0.9048 | 0.9062 | 0.9055 | 0.8628 | 0.8713 | 0.8670 |
| 0.0069 | 6.0 | 10374 | 0.3017 | 0.5559 | 0.6138 | 0.5834 | 0.9556 | 0.9677 | 0.9616 | 0.8945 | 0.9107 | 0.9025 | 0.8475 | 0.8754 | 0.8612 |
| 0.0035 | 7.0 | 12103 | 0.3325 | 0.5541 | 0.6241 | 0.5870 | 0.9448 | 0.9740 | 0.9592 | 0.8859 | 0.9107 | 0.8982 | 0.8392 | 0.8801 | 0.8591 |
| 0.0016 | 8.0 | 13832 | 0.3345 | 0.5851 | 0.6109 | 0.5977 | 0.9537 | 0.9691 | 0.9613 | 0.8981 | 0.9138 | 0.9059 | 0.8576 | 0.8766 | 0.8670 |
| 0.0031 | 9.0 | 15561 | 0.3414 | 0.5974 | 0.6035 | 0.6004 | 0.9575 | 0.9642 | 0.9608 | 0.9094 | 0.9107 | 0.9101 | 0.8671 | 0.8719 | 0.8695 |
| 0.0014 | 10.0 | 17290 | 0.3479 | 0.5977 | 0.6153 | 0.6064 | 0.9518 | 0.9698 | 0.9607 | 0.8901 | 0.9130 | 0.9014 | 0.8572 | 0.8774 | 0.8672 |
| 0.0005 | 11.0 | 19019 | 0.3542 | 0.5892 | 0.6065 | 0.5977 | 0.9524 | 0.9698 | 0.9610 | 0.8970 | 0.9153 | 0.9060 | 0.8583 | 0.8766 | 0.8673 |
| 0.0002 | 12.0 | 20748 | 0.3617 | 0.5916 | 0.6021 | 0.5968 | 0.9531 | 0.9698 | 0.9614 | 0.8976 | 0.9145 | 0.9060 | 0.8598 | 0.8754 | 0.8676 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
dnjdsxor21/roberta-korquad-wiki
|
dnjdsxor21
| 2023-06-15T16:06:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"ko",
"endpoints_compatible",
"region:us"
] | null | 2023-06-14T15:15:45Z |
---
language:
- ko
metrics:
- exact_match
- f1
---
### Fine-tuned version of `klue/roberta-large` for question answering
Data: KorQuAD v1 + Wikipedia
```python
from transformers import AutoConfig, AutoTokenizer, RobertaForQuestionAnswering

# load the fine-tuned checkpoint, its config, and its tokenizer
config = AutoConfig.from_pretrained('dnjdsxor21/roberta-korquad-wiki')
model = RobertaForQuestionAnswering.from_pretrained('dnjdsxor21/roberta-korquad-wiki', config=config)
tokenizer = AutoTokenizer.from_pretrained('dnjdsxor21/roberta-korquad-wiki')
```
|
hyeamykim/finetuning-sentiment-model-3000-samples
|
hyeamykim
| 2023-06-15T15:42:49Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T15:04:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8833333333333333
- name: F1
type: f1
value: 0.8844884488448845
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2944
- Accuracy: 0.8833
- F1: 0.8845
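A short inference sketch with the `text-classification` pipeline (the label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hyeamykim/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good, I would watch it again."))
```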
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Ismhrn/ISMHRNIAI
|
Ismhrn
| 2023-06-15T15:37:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T15:32:21Z |
---
license: creativeml-openrail-m
---
|
Leukschrauber/ppo-LunarLander-v2
|
Leukschrauber
| 2023-06-15T15:36:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T15:36:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.80 +/- 19.06
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
raghvendramall/esm2_t30_150M_UR50D-crystallization-finetuned-localization
|
raghvendramall
| 2023-06-15T15:27:41Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T13:16:59Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: esm2_t30_150M_UR50D-crystallization-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t30_150M_UR50D-crystallization-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4161
- F1: 0.5994
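A hedged sketch for scoring a single protein sequence is shown below; ESM-style models take amino-acid strings as input, and the class names come from the model config (they are not documented in this card).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "raghvendramall/esm2_t30_150M_UR50D-crystallization-finetuned-localization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```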
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3948 | 1.0 | 1065 | 0.3929 | 0.6516 |
| 0.2678 | 2.0 | 2130 | 0.4647 | 0.6237 |
| 0.1524 | 3.0 | 3195 | 0.6984 | 0.6356 |
| 0.072 | 4.0 | 4260 | 0.9448 | 0.6312 |
| 0.0341 | 5.0 | 5325 | 1.1157 | 0.6099 |
| 0.0145 | 6.0 | 6390 | 1.2051 | 0.6144 |
| 0.0079 | 7.0 | 7455 | 1.3259 | 0.6149 |
| 0.007 | 8.0 | 8520 | 1.3418 | 0.6008 |
| 0.0027 | 9.0 | 9585 | 1.3921 | 0.6001 |
| 0.0014 | 10.0 | 10650 | 1.4161 | 0.5994 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JCTN/models
|
JCTN
| 2023-06-15T15:27:04Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-11T15:13:40Z |
---
license: creativeml-openrail-m
---
|
kudeponay/CNARealLoRA
|
kudeponay
| 2023-06-15T15:26:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T15:26:23Z |
---
license: creativeml-openrail-m
---
|
203427as321/hnai_06152023_151129
|
203427as321
| 2023-06-15T15:22:49Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T15:11:51Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hnai_06152023_151129
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hnai_06152023_151129
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
elbanhawy/bard_PDF_QA
|
elbanhawy
| 2023-06-15T15:22:26Z | 0 | 0 |
transformers
|
[
"transformers",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-06-15T15:16:59Z |
---
license: openrail
library_name: transformers
Model: AutoModelForQuestionAnswering
Pretrained Model: bard
Learning Rate: 0.0001
Batch Size: 32
Epochs: 10
---
|
BChevva/finetuning-sentiment-model-3000-samples
|
BChevva
| 2023-06-15T15:10:34Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T12:32:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8717948717948718
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8631
- Accuracy: 0.8667
- F1: 0.8718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
panpannn/pitloraaa
|
panpannn
| 2023-06-15T15:10:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T15:08:25Z |
---
license: creativeml-openrail-m
---
|
RightProfit/ppo-LunarLander-v2
|
RightProfit
| 2023-06-15T15:08:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T15:07:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.90 +/- 42.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
panpannn/pitloraa
|
panpannn
| 2023-06-15T15:05:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T15:05:42Z |
---
license: creativeml-openrail-m
---
|
7sunshine/noniw
|
7sunshine
| 2023-06-15T14:38:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T14:37:16Z |
---
license: creativeml-openrail-m
---
|
TheBloke/starchat-beta-GGML
|
TheBloke
| 2023-06-15T14:30:49Z | 12 | 34 |
transformers
|
[
"transformers",
"starcoder",
"generated_from_trainer",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-06-08T22:29:50Z |
---
inference: false
tags:
- generated_from_trainer
model-index:
- name: starchat-beta
results: []
license: bigcode-openrail-m
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# HuggingFaceH4's Starchat Beta GGML
These files are GGML format model files for [HuggingFaceH4's Starchat Beta](https://huggingface.co/HuggingFaceH4/starchat-beta).
Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starchat-beta-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starchat-beta-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/starchat-beta)
## Prompt template
```
<|system|> system message goes here <|end|>
<|user|> prompt goes here <|end|>
<|assistant|>
```
Example:
```
<|system|> Below is a conversation between a human user and a helpful AI coding assistant. <|end|>
<|user|> How do I sort a list in Python? <|end|>
<|assistant|>
```
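If you build prompts programmatically, the template can be assembled with plain string formatting before handing it to whichever runtime you use (a minimal sketch; the system message and question are placeholders):

```python
# Minimal sketch: fill in the StarChat prompt template shown above.
system_message = "Below is a conversation between a human user and a helpful AI coding assistant."
user_message = "How do I sort a list in Python?"

prompt = (
    f"<|system|> {system_message} <|end|>\n"
    f"<|user|> {user_message} <|end|>\n"
    f"<|assistant|>"
)
print(prompt)
```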
## Live demo and API
[Matt Hoffner](https://huggingface.co/matthoffner) has created two Spaces for this model, using the GGML files provided in this repo:
* API: https://huggingface.co/spaces/matthoffner/starchat-ggml
* UI: https://huggingface.co/spaces/matthoffner/starchat-ui
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
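As an illustration of the ctransformers route, here is a minimal loading sketch (assumes a recent ctransformers release; the quantisation file is just one example from the Provided files table below):

```python
from ctransformers import AutoModelForCausalLM

# Load one of the GGML files from this repo; model_type "starcoder" covers StarCoder-family models.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/starchat-beta-GGML",
    model_file="starchat-beta.ggmlv3.q4_0.bin",
    model_type="starcoder",
)

prompt = "<|system|> <|end|>\n<|user|> How do I sort a list in Python? <|end|>\n<|assistant|>"
print(llm(prompt, max_new_tokens=128, temperature=0.2, stop=["<|end|>"]))
```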
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| starchat-beta.ggmlv3.q4_0.bin | q4_0 | 4 | 10.75 GB | 13.25 GB | Original llama.cpp quant method, 4-bit. |
| starchat-beta.ggmlv3.q4_1.bin | q4_1 | 4 | 11.92 GB | 14.42 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| starchat-beta.ggmlv3.q5_0.bin | q5_0 | 5 | 13.09 GB | 15.59 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| starchat-beta.ggmlv3.q5_1.bin | q5_1 | 5 | 14.26 GB | 16.76 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| starchat-beta.ggmlv3.q8_0.bin | q8_0 | 8 | 20.11 GB | 22.61 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
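To grab just one of the files above rather than cloning the whole repo, here is a sketch using `huggingface_hub` (pick whichever filename from the table fits your RAM budget):

```python
from huggingface_hub import hf_hub_download

# Download a single quantised GGML file from this repository.
model_path = hf_hub_download(
    repo_id="TheBloke/starchat-beta-GGML",
    filename="starchat-beta.ggmlv3.q4_0.bin",
)
print(model_path)  # local path to the downloaded file
```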
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: HuggingFaceH4's Starchat Beta
<img src="https://huggingface.co/HuggingFaceH4/starchat-beta/resolve/main/model_logo.png" alt="StarChat Beta Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for StarChat Beta
StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat Beta is the second model in the series, and is a fine-tuned version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus) that was trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** A 16B parameter GPT-like model fine-tuned on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
- **Language(s) (NLP):** Primarily English and 80+ programming languages.
- **License:** BigCode Open RAIL-M v1
- **Finetuned from model:** [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bigcode-project/starcoder
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/starchat-playground
## Intended uses & limitations
The model was fine-tuned on a variant of the [`OpenAssistant/oasst1`](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset, which contains a diverse range of dialogues in over 35 languages. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) to test its coding capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta", torch_dtype=torch.bfloat16, device_map="auto")
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")
# We use a special <|end|> token with ID 49155 to denote ends of a turn
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
StarChat Beta has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Models trained primarily on code data will also have a more skewed demographic bias commensurate with the demographics of the GitHub community; for more on this, see the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata), which is derived from The Stack.
Since the base model was pretrained on a large corpus of code, it may produce code snippets that are syntactically valid but semantically incorrect.
For example, it may produce code that does not compile or that produces incorrect results.
It may also produce code that is vulnerable to security exploits.
We have observed the model also has a tendency to produce false URLs which should be carefully inspected before clicking.
StarChat Beta was fine-tuned from [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus), which in turn builds on [StarCoder Base](https://huggingface.co/bigcode/starcoderbase); please refer to that model card's [Limitations Section](https://huggingface.co/bigcode/starcoderbase#limitations) for relevant information.
In particular, the model was evaluated on some categories of gender biases, propensity for toxicity, and risk of suggesting code completions with known security flaws; these evaluations are reported in its [technical report](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view).
## Training and evaluation data
StarChat Beta is trained on an ["uncensored"](https://erichartford.com/uncensored-models) variant of the [`openassistant-guanaco` dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We applied the same [recipe](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered/blob/main/wizardlm_clean.py) used to filter the ShareGPT datasets behind the [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) model.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5321 | 0.98 | 15 | 1.2856 |
| 1.2071 | 1.97 | 30 | 1.2620 |
| 1.0162 | 2.95 | 45 | 1.2853 |
| 0.8484 | 4.0 | 61 | 1.3274 |
| 0.6981 | 4.98 | 76 | 1.3994 |
| 0.5668 | 5.9 | 90 | 1.4720 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{Tunstall2023starchat-alpha,
author = {Tunstall, Lewis and Lambert, Nathan and Rajani, Nazneen and Beeching, Edward and Le Scao, Teven and von Werra, Leandro and Han, Sheon and Schmid, Philipp and Rush, Alexander},
title = {Creating a Coding Assistant with StarCoder},
journal = {Hugging Face Blog},
year = {2023},
note = {https://huggingface.co/blog/starchat},
}
```
|
203427as321/hnai_06152023_140006
|
203427as321
| 2023-06-15T14:11:07Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T14:00:14Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: hnai_06152023_140006
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hnai_06152023_140006
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
karoldobiczek/sam-controlnet-fresh
|
karoldobiczek
| 2023-06-15T13:57:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-15T10:54:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-karoldobiczek/sam-controlnet-fresh
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: Horses graze in front of a large building amid snow.

prompt: some horses grazing in front of a church

prompt: A dog watches an animal on the television.

prompt: A four engine jet transport airplane flying low.

prompt: The snow covered mountain looks over the small city.

prompt: Close up of a traffic light with three lights, the top illuminated red with a person image, the second down not illuminated, and the bottom on hanging down.

prompt: several men on a street corner repairing a street sign

prompt: An arrow on the sign points the way to the Oil City Restaurant drive thru window.

prompt: A tall two story gray house sitting in front of a street sign that readsd Nirvana Dr.

prompt: several women on a street corner repairing a street sign

|
gokuls/hBERTv1_new_pretrain_48_emb_com_wnli
|
gokuls
| 2023-06-15T13:55:37Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T13:51:32Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6859
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8985 | 1.0 | 5 | 0.9144 | 0.4366 |
| 0.7419 | 2.0 | 10 | 0.7704 | 0.4366 |
| 0.7079 | 3.0 | 15 | 0.7121 | 0.4366 |
| 0.6978 | 4.0 | 20 | 0.6859 | 0.5634 |
| 0.7001 | 5.0 | 25 | 0.7479 | 0.4366 |
| 0.7268 | 6.0 | 30 | 0.6904 | 0.5634 |
| 0.7028 | 7.0 | 35 | 0.7271 | 0.4366 |
| 0.7096 | 8.0 | 40 | 0.6870 | 0.5634 |
| 0.6953 | 9.0 | 45 | 0.7185 | 0.4366 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_emb_com_stsb
|
gokuls
| 2023-06-15T13:51:17Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T13:04:46Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.45996385438365645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9214
- Pearson: 0.4648
- Spearmanr: 0.4600
- Combined Score: 0.4624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5817 | 1.0 | 45 | 2.6028 | 0.2027 | 0.1896 | 0.1962 |
| 2.1023 | 2.0 | 90 | 2.1596 | 0.2035 | 0.1938 | 0.1986 |
| 1.9567 | 3.0 | 135 | 2.3409 | 0.1855 | 0.1931 | 0.1893 |
| 1.7201 | 4.0 | 180 | 2.1790 | 0.2865 | 0.2934 | 0.2899 |
| 1.5153 | 5.0 | 225 | 2.1208 | 0.3381 | 0.3352 | 0.3367 |
| 1.2674 | 6.0 | 270 | 2.1224 | 0.3882 | 0.3898 | 0.3890 |
| 1.0115 | 7.0 | 315 | 2.2253 | 0.4304 | 0.4281 | 0.4293 |
| 0.7449 | 8.0 | 360 | 2.3235 | 0.4236 | 0.4323 | 0.4279 |
| 0.66 | 9.0 | 405 | 2.3617 | 0.4340 | 0.4351 | 0.4346 |
| 0.4678 | 10.0 | 450 | 2.0741 | 0.4300 | 0.4258 | 0.4279 |
| 0.4438 | 11.0 | 495 | 2.3816 | 0.4285 | 0.4294 | 0.4289 |
| 0.3192 | 12.0 | 540 | 2.1673 | 0.4580 | 0.4602 | 0.4591 |
| 0.2481 | 13.0 | 585 | 2.1544 | 0.4392 | 0.4357 | 0.4374 |
| 0.2296 | 14.0 | 630 | 2.0075 | 0.4603 | 0.4582 | 0.4593 |
| 0.1765 | 15.0 | 675 | 2.1395 | 0.4624 | 0.4617 | 0.4621 |
| 0.1533 | 16.0 | 720 | 2.2715 | 0.4512 | 0.4427 | 0.4469 |
| 0.1343 | 17.0 | 765 | 2.1726 | 0.4441 | 0.4417 | 0.4429 |
| 0.1373 | 18.0 | 810 | 2.0223 | 0.4532 | 0.4424 | 0.4478 |
| 0.1277 | 19.0 | 855 | 1.9992 | 0.4395 | 0.4299 | 0.4347 |
| 0.0968 | 20.0 | 900 | 2.1078 | 0.4620 | 0.4601 | 0.4610 |
| 0.084 | 21.0 | 945 | 2.0684 | 0.4627 | 0.4577 | 0.4602 |
| 0.0777 | 22.0 | 990 | 1.9214 | 0.4648 | 0.4600 | 0.4624 |
| 0.0572 | 23.0 | 1035 | 2.0636 | 0.4506 | 0.4422 | 0.4464 |
| 0.0615 | 24.0 | 1080 | 2.0404 | 0.4489 | 0.4388 | 0.4438 |
| 0.0516 | 25.0 | 1125 | 2.0599 | 0.4516 | 0.4435 | 0.4475 |
| 0.0501 | 26.0 | 1170 | 2.0359 | 0.4530 | 0.4489 | 0.4510 |
| 0.0515 | 27.0 | 1215 | 1.9571 | 0.4588 | 0.4508 | 0.4548 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Tommert25/robbertfinetuned0906
|
Tommert25
| 2023-06-15T13:47:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-09T13:42:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: robbertfinetuned0906
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbertfinetuned0906
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5859
- Precision: 0.7151
- Recall: 0.7079
- F1: 0.7115
- Accuracy: 0.9186
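The label set is not documented here, but the weights can be loaded with the standard token-classification pipeline (a minimal sketch; the labels returned are whatever the fine-tuning data defined):
```python
from transformers import pipeline

# Load the fine-tuned RobBERT token-classification model from the Hub.
nlp = pipeline("token-classification", model="Tommert25/robbertfinetuned0906", aggregation_strategy="simple")
print(nlp("Dit is een voorbeeldzin om het model te testen."))
```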
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.046 | 1.0 | 580 | 0.5770 | 0.6912 | 0.6633 | 0.6769 | 0.9102 |
| 0.0405 | 2.0 | 1160 | 0.5704 | 0.6996 | 0.6835 | 0.6914 | 0.9133 |
| 0.0346 | 3.0 | 1740 | 0.5786 | 0.6951 | 0.7201 | 0.7074 | 0.9130 |
| 0.0242 | 4.0 | 2320 | 0.5453 | 0.7098 | 0.7216 | 0.7157 | 0.9186 |
| 0.0184 | 5.0 | 2900 | 0.6058 | 0.7118 | 0.7036 | 0.7077 | 0.9189 |
| 0.0087 | 6.0 | 3480 | 0.5859 | 0.7151 | 0.7079 | 0.7115 | 0.9186 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_no_pretrain_mnli
|
gokuls
| 2023-06-15T13:35:26Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T12:22:06Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_no_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3522172497965826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_mnli
This model was trained from scratch (no pretrained checkpoint) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0983
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1022 | 1.0 | 3068 | 1.0986 | 0.3182 |
| 1.0988 | 2.0 | 6136 | 1.0982 | 0.3545 |
| 1.0987 | 3.0 | 9204 | 1.0986 | 0.3274 |
| 1.0988 | 4.0 | 12272 | 1.0988 | 0.3182 |
| 1.0986 | 5.0 | 15340 | 1.0986 | 0.3274 |
| 1.0987 | 6.0 | 18408 | 1.0986 | 0.3182 |
| 1.0986 | 7.0 | 21476 | 1.0986 | 0.3182 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
edbeeching/falcon-7b-ift-rm-22
|
edbeeching
| 2023-06-15T13:34:47Z | 4 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"region:us"
] | null | 2023-06-15T13:34:45Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: falcon-7b-ift-rm-22
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-ift-rm-22
This model is a fine-tuned version of [HuggingFaceH4/falcon-7b-ift](https://huggingface.co/HuggingFaceH4/falcon-7b-ift) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6089
- Accuracy: 0.6533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5935 | 1.0 | 2197 | 0.6089 | 0.6533 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chencjiajy/ppo-Huggy
|
chencjiajy
| 2023-06-15T13:26:06Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T13:25:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chencjiajy/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SargeZT/velocipedeux
|
SargeZT
| 2023-06-15T13:19:29Z | 38 | 0 |
diffusers
|
[
"diffusers",
"en",
"license:bsd",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T13:02:57Z |
---
license: bsd
language:
- en
---
# Model Card for Velocipedeux
A Stable Diffusion 1.5 model finetuned with v-prediction, zero terminal SNR, and trailing timesteps using a diverse dataset.
## Model Details
### Model Description
This model is a finetune of Stable Diffusion 1.5 that implements Zero Terminal SNR scaling, V-Prediction, and the use of trailing timesteps during training.
This model is in active development and should not be considered final.
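Since the training setup above changes the prediction target and noise schedule, inference should use a scheduler configured the same way. A minimal sketch with diffusers (assumes a release that supports `timestep_spacing` and `rescale_betas_zero_snr`; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained("SargeZT/velocipedeux", torch_dtype=torch.float16).to("cuda")

# Match the training configuration described above: v-prediction, zero terminal SNR,
# and trailing timestep spacing.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    timestep_spacing="trailing",
    rescale_betas_zero_snr=True,
)

image = pipe("a photograph of an old velocipede on a cobblestone street", guidance_scale=7.5).images[0]
image.save("velocipedeux.png")
```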
|
hangeol/standingdogprompt
|
hangeol
| 2023-06-15T13:19:01Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T11:16:52Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/standingdogprompt
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
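A minimal loading sketch with diffusers (the placeholder token learned during training is not stated in this card, so the token in the prompt below is an assumption):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and then the textual inversion embedding from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("hangeol/standingdogprompt")

# "<standingdog>" is a hypothetical placeholder token; replace it with the token used at training time.
image = pipe("a photo of <standingdog> sitting in a park").images[0]
image.save("standingdog.png")
```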
|
dasilluc/KI
|
dasilluc
| 2023-06-15T13:15:26Z | 6 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-15T12:48:51Z |
---
pipeline_tag: image-classification
---
|
gokuls/hBERTv1_new_pretrain_48_emb_com_rte
|
gokuls
| 2023-06-15T13:04:29Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T12:56:32Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.4729241877256318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_rte
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6935
- Accuracy: 0.4729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7535 | 1.0 | 20 | 0.6953 | 0.4729 |
| 0.7205 | 2.0 | 40 | 0.6935 | 0.4729 |
| 0.7032 | 3.0 | 60 | 0.6941 | 0.5271 |
| 0.6969 | 4.0 | 80 | 0.7111 | 0.4729 |
| 0.7173 | 5.0 | 100 | 0.7630 | 0.5090 |
| 0.6969 | 6.0 | 120 | 0.7185 | 0.4946 |
| 0.6389 | 7.0 | 140 | 0.8181 | 0.5307 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DucHaiten/DucHaitenJourney
|
DucHaiten
| 2023-06-15T12:58:48Z | 304 | 9 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-12T16:01:27Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
license: creativeml-openrail-m
inference: true
---
Recommended settings: DPM++ 2S a Karras sampler, CFG scale 10.
Results are better at larger resolutions such as 768x768; 512x512 will give poor quality.
Negative prompt:
illustration, painting, cartoons, sketch, (worst quality:2), (low quality:2), (normal quality:2), lowres, bad anatomy, bad hands, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyeblows, vaginas in breasts, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error
|