pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0-18.3M chars) | metadata (string, 2-1.07B chars) | id (string, 5-122 chars) | last_modified (null) | tags (list, 1-1.84k items) | sha (null) | created_at (string, 25 chars)
---|---|---|---|---|---|---|---|---
null | null |
# Paragram Embeddings
Towards Universal Paraphrastic Sentence Embeddings (25 dimensions)
Read more:
* https://www.cs.cmu.edu/~jwieting/
* https://www.cs.cmu.edu/~jwieting/wieting2016ICLR.pdf
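A minimal loading sketch (not part of the original card): it assumes the companion `fse` package and its `Vectors.from_pretrained` helper, which these fse/ repositories are published for; the resulting object behaves like gensim `KeyedVectors`.
```python
# Hedged sketch: assumes the fse package (pip install fse) and that
# Vectors.from_pretrained resolves the "paragram-25" repository.
from fse import Vectors

vecs = Vectors.from_pretrained("paragram-25")     # fetches fse/paragram-25
print(vecs["king"].shape)                         # 25-dimensional word vector
print(vecs.most_similar("king", topn=3))          # nearest neighbours by cosine
```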
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paragram-25
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Paragram Embeddings
300-dimensional Paragram embeddings tuned on the SimLex999 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paragram-300-sl999
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Paragram Embeddings
300-dimensional Paragram embeddings tuned on the WordSim353 dataset
Read more:
* https://www.cs.cmu.edu/~jwieting/
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paragram-300-ws353
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Paragram Embeddings
Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations (300 dimensions)
Read more:
* https://www.cs.cmu.edu/~jwieting/
* https://www.cs.cmu.edu/~jwieting/wieting2017Millions.pdf
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/paranmt-300
| null |
[
"glove",
"gensim",
"fse",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
# Word2Vec
Pre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality'
Read more:
* https://code.google.com/archive/p/word2vec/
* https://arxiv.org/abs/1301.3781
* https://arxiv.org/abs/1310.4546
* https://www.microsoft.com/en-us/research/publication/linguistic-regularities-in-continuous-space-word-representations/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F189726%2Frvecs.pdf
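A short usage sketch (not part of the original card), assuming the identically named release in gensim-data; the downloader name below mirrors this repository.
```python
# Hedged sketch: loads the Google News vectors through gensim's downloader.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")     # ~1.6 GB download on first use
print(wv["computer"].shape)                   # (300,)
print(wv.most_similar("New_York", topn=3))    # phrases are joined with underscores
```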
|
{"tags": ["glove", "gensim", "fse"]}
|
fse/word2vec-google-news-300
| null |
[
"glove",
"gensim",
"fse",
"arxiv:1301.3781",
"arxiv:1310.4546",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
{}
|
fspanda/Electra-Medical-v1.5-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fspanda/Electra-Medical-v1.5-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
fspanda/Electra-Medical-v790000-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fspanda/Electra-Medical-v790000-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fspanda/Medical-Bio-BERT2
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
fspanda/electra-medical-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
fspanda/electra-medical-small-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fspanda/electra-medical-small-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Bully Maguire demo bot
|
{"tags": ["conversational"]}
|
ftnvir/DialoGPT-medium-bullyMaguire
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
This model was trained by ftshijt using the aishell3/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>.
<p> </p>
<ul>
<li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li>
<li><strong>Evaluate in the recipe</strong><pre>
<code class="language-bash">
See the ESPnet repo for how to use pre-trained models
</code></pre></li>
<li><strong>Config</strong><pre><code>config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - loss
  - min
- - train
  - loss
  - min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_no_dev/text
  - text
  - text
- - dump/raw/train_no_dev/wav.scp
  - speech
  - sound
- - dump/xvector/train_no_dev/xvector.scp
  - spembs
  - kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
  - text
  - text
- - dump/raw/dev/wav.scp
  - speech
  - sound
- - dump/xvector/dev/xvector.scp
  - spembs
  - kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.001
    eps: 1.0e-06
    weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- i4
- zh
- l
- x
- e
- b
- g
- i1
- h
- q
- m
- u4
- t
- z
- ch
- i3
- i2
- f
- s
- n
- r
- ian4
- e4
- ong1
- en2
- ai4
- k
- ing2
- a1
- iou3
- uo3
- ao4
- u3
- ui4
- p
- e2
- an1
- eng2
- c
- in1
- ai2
- an4
- ian2
- ing1
- ai3
- ang4
- ao3
- ian1
- uo4
- ian3
- iao4
- ang1
- u2
- ü4
- u1
- a4
- eng1
- ing4
- üan2
- ie4
- en1
- iu4
- uei4
- ou4
- er4
- e1
- ei4
- an3
- ong2
- uo2
- ang3
- ou1
- ou3
- ong4
- eng4
- an2
- iang4
- a3
- iang1
- ia1
- iao1
- uan4
- ia4
- iu3
- ang2
- uo1
- ei3
- e3
- in4
- iang3
- ü1
- uan1
- en3
- iao3
- ie3
- ao1
- ai1
- ü2
- ing3
- er2
- ü3
- uan3
- üe4
- in3
- en
- ei2
- üe2
- ie2
- en4
- ua4
- in2
- iu2
- uan2
- a2
- ie1
- ou2
- ui1
- iang2
- ong3
- i
- uang3
- eng3
- ün4
- uang4
- uai4
- iong4
- v3
- iou2
- ui2
- un1
- üan4
- uang1
- ei1
- uang2
- o2
- a
- ao2
- iao2
- ui3
- un4
- o1
- ua2
- un2
- uen2
- iu1
- v4
- ua1
- uei1
- üan3
- ün1
- üe1
- ün2
- uen4
- uei3
- uei2
- un3
- iou4
- o4
- er3
- uen1
- iong3
- iou1
- ia3
- üan1
- ia2
- iong1
- üe3
- uen3
- ve4
- iong2
- uai2
- uai1
- ua3
- ün3
- er
- uai3
- ia
- o3
- v2
- o
- ueng1
- ei
- '2'
- ua
- io1
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
    n_fft: 2048
    hop_length: 300
    win_length: 1200
    fs: 24000
    fmin: 80
    fmax: 7600
    n_mels: 80
normalize: global_mvn
normalize_conf:
    stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
    embed_dim: 512
    elayers: 1
    eunits: 512
    econv_layers: 3
    econv_chans: 512
    econv_filts: 5
    atype: location
    adim: 512
    aconv_chans: 32
    aconv_filts: 15
    cumulate_att_w: true
    dlayers: 2
    dunits: 1024
    prenet_layers: 2
    prenet_units: 256
    postnet_layers: 5
    postnet_chans: 512
    postnet_filts: 5
    output_activation: null
    use_batch_norm: true
    use_concate: true
    use_residual: false
    spk_embed_dim: 512
    spk_embed_integration_type: add
    use_gst: true
    gst_heads: 4
    gst_tokens: 16
    dropout_rate: 0.5
    zoneout_rate: 0.1
    reduction_factor: 1
    use_masking: true
    bce_pos_weight: 10.0
    use_guided_attn_loss: true
    guided_attn_loss_sigma: 0.4
    guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false</code></pre></li>
</ul>
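A hedged loading sketch (not part of the original card), using the espnet_model_zoo downloader referenced above; note that this multi-speaker recipe uses 512-dimensional x-vector speaker embeddings (see <code>spk_embed_dim</code> in the config), so one must be supplied at synthesis time.
<pre><code class="language-python"># Hedged sketch: assumes espnet2 and espnet_model_zoo are installed.
import numpy as np
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.tts_inference import Text2Speech

d = ModelDownloader()
tts = Text2Speech(**d.download_and_unpack(
    "ftshijt/ESPnet2_pretrained_model_ftshijt_aishell3_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best"))

# Placeholder speaker embedding; in practice extract an x-vector for a target speaker.
spembs = np.zeros(512, dtype=np.float32)
wav = tts("你好，世界", spembs=spembs)["wav"]
</code></pre>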
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["aishell3"], "inference": false}
|
ftshijt/ESPnet2_pretrained_model_ftshijt_aishell3_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:aishell3",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-to-speech
|
espnet
|
This model was trained by ftshijt using the thchs30/tts1 recipe in <a href="https://github.com/espnet/espnet/">espnet</a>.
<p> </p>
<ul>
<li><strong>Python API</strong><pre><code class="language-python">See https://github.com/espnet/espnet_model_zoo</code></pre></li>
<li><strong>Evaluate in the recipe</strong><pre>
<code class="language-bash">Please see the ESPnet repo for how to use pre-trained models
</code></pre></li>
<li><strong>Config</strong><pre><code>config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - loss
  - min
- - train
  - loss
  - min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/text
  - text
  - text
- - dump/raw/train/wav.scp
  - speech
  - sound
- - dump/xvector/train/xvector.scp
  - spembs
  - kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
  - text
  - text
- - dump/raw/dev/wav.scp
  - speech
  - sound
- - dump/xvector/dev/xvector.scp
  - spembs
  - kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.001
    eps: 1.0e-06
    weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- zh
- l
- i4
- x
- b
- g
- h
- e
- q
- t
- m
- ch
- i1
- z
- u4
- i2
- i3
- n
- f
- s
- r
- k
- c
- p
- ai4
- e4
- a1
- an4
- ian4
- ing2
- u3
- ian2
- ong1
- e2
- in1
- eng2
- ui4
- ao4
- u2
- iao4
- üan2
- en2
- an1
- u1
- ai2
- ao3
- ing4
- eng1
- iou3
- ü4
- uo4
- üe4
- ong2
- ian1
- ing1
- uo3
- ie4
- ang1
- uei4
- ang4
- an2
- a4
- ou4
- ei4
- uai4
- ie3
- ang3
- ong4
- ai3
- ü2
- uo2
- an3
- ang2
- ou3
- er2
- ou1
- uo1
- en1
- ia1
- ü3
- uan1
- in2
- iong4
- ian3
- iang3
- a3
- iang2
- ia4
- ü1
- uan4
- iao3
- iang4
- uen2
- iang1
- uan3
- ai1
- ie2
- ei3
- uan2
- uang2
- in4
- üe2
- ao1
- eng3
- iu4
- iao1
- er4
- iu2
- in3
- un1
- uang1
- eng4
- a2
- uang3
- en3
- uang4
- ong3
- ing3
- e3
- ei2
- ou2
- ao2
- i
- ün4
- uei2
- ua4
- iou4
- ui1
- ua1
- en4
- ün2
- iao2
- ie1
- iou2
- iu3
- ün1
- üan4
- en
- ei1
- o2
- un4
- ui3
- iu1
- üan3
- e1
- v3
- ua2
- ia2
- ui2
- un2
- o4
- un3
- er3
- ia3
- iong1
- uei3
- o1
- üe1
- üan1
- iong3
- v4
- iong2
- uen4
- uai2
- uei1
- iou1
- a
- ua3
- uen1
- o3
- ueng1
- uai1
- uen3
- üe3
- ou
- uai3
- ve4
- er
- ün3
- o
- ua
- ia
- ' l ='
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
    n_fft: 1024
    hop_length: 256
    win_length: null
    fs: 16000
    fmin: 80
    fmax: 7600
    n_mels: 80
normalize: global_mvn
normalize_conf:
    stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
    embed_dim: 512
    elayers: 1
    eunits: 512
    econv_layers: 3
    econv_chans: 512
    econv_filts: 5
    atype: location
    adim: 512
    aconv_chans: 32
    aconv_filts: 15
    cumulate_att_w: true
    dlayers: 2
    dunits: 1024
    prenet_layers: 2
    prenet_units: 256
    postnet_layers: 5
    postnet_chans: 512
    postnet_filts: 5
    output_activation: null
    use_batch_norm: true
    use_concate: true
    use_residual: false
    spk_embed_dim: 512
    spk_embed_integration_type: add
    use_gst: true
    gst_heads: 4
    gst_tokens: 16
    dropout_rate: 0.5
    zoneout_rate: 0.1
    reduction_factor: 1
    use_masking: true
    bce_pos_weight: 10.0
    use_guided_attn_loss: true
    guided_attn_loss_sigma: 0.4
    guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false</code></pre></li>
</ul>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["thchs30"], "inference": false}
|
ftshijt/ESPnet2_pretrained_model_ftshijt_thchs30_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:thchs30",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ftshijt/ftshijt_espnet2_asr_puebla_nahuatl_transfer_learning
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fuliucansheng/adsplus
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fuliucansheng/detection
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fuliucansheng/detectron2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fuliucansheng/mass
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fuliucansheng/unilm
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
https://vrip.unmsm.edu.pe/forum/profile/liexylezzy/
https://vrip.unmsm.edu.pe/forum/profile/ellindanatasya/
https://vrip.unmsm.edu.pe/forum/profile/oploscgv/
https://vrip.unmsm.edu.pe/forum/profile/Zackoplos/
https://vrip.unmsm.edu.pe/forum/profile/unholyzulk/
https://vrip.unmsm.edu.pe/forum/profile/aurorarezash/
|
{}
|
fullshowbox/DSADAWF
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
https://community.afpglobal.org/network/members/profile?UserKey=fb4fdcef-dde4-4258-a423-2159545d84c1
https://community.afpglobal.org/network/members/profile?UserKey=e6ccc088-b709-45ec-b61e-4d56088acbda
https://community.afpglobal.org/network/members/profile?UserKey=ba280059-0890-4510-81d0-a79522b75ac8
https://community.afpglobal.org/network/members/profile?UserKey=799ba769-6e99-4a6a-a173-4f1b817e978c
https://community.afpglobal.org/network/members/profile?UserKey=babb84d7-e91a-4972-b26a-51067c66d793
https://community.afpglobal.org/network/members/profile?UserKey=8e4656bc-8d0d-44e1-b280-e68a2ace9353
https://community.afpglobal.org/network/members/profile?UserKey=8e7b41a8-9bed-4cb0-9021-a164b0aa6dd3
https://community.afpglobal.org/network/members/profile?UserKey=e4f38596-d772-4fbe-9e93-9aef5618f26e
https://community.afpglobal.org/network/members/profile?UserKey=18221e49-74ba-4155-ac1e-6f184bfb2398
https://community.afpglobal.org/network/members/profile?UserKey=ef4391e8-03df-467f-bf3f-4a45087817eb
https://community.afpglobal.org/network/members/profile?UserKey=832774fd-a035-421a-8236-61cf45a7747d
https://community.afpglobal.org/network/members/profile?UserKey=9f05cd73-b75c-4820-b60a-5df6357b2af9
https://community.afpglobal.org/network/members/profile?UserKey=c1727992-5024-4321-b0c9-ecc6f51e6532
https://www.hybrid-analysis.com/sample/255948e335dd9f873d11bf0224f8d180cd097509d23d27506292c22443fa92b8
https://www.facebook.com/PS5Giveaways2021
https://cgvmovie.cookpad-blog.jp/articles/589986
https://myanimelist.net/blog.php?eid=850892
https://comicvine.gamespot.com/profile/full-tv-free/about-me/
https://pantip.com/topic/40658194
|
{}
|
fullshowbox/full-tv-free
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
https://volunteer.alz.org/network/members/profile?UserKey=f4774542-39b3-4cfd-8c21-7b834795f7d7
https://volunteer.alz.org/network/members/profile?UserKey=05a00b90-f854-45fb-9a3a-7420144d290c
https://volunteer.alz.org/network/members/profile?UserKey=45cceddd-29b9-4c6c-8612-e2a16aaa391a
https://volunteer.alz.org/network/members/profile?UserKey=ae3c28f9-72a3-4af5-bd50-3b2ea2c0d3a3
https://volunteer.alz.org/network/members/profile?UserKey=7ab8e28e-e31f-4906-ab06-84b9ea3a880f
https://volunteer.alz.org/network/members/profile?UserKey=1b31fc90-e18e-4ef6-81f0-5c0b55fb95a3
https://volunteer.alz.org/network/members/profile?UserKey=23971b11-04ad-4eb4-abc5-6e659c6b071c
123movies-watch-online-movie-full-free-2021
https://myanimelist.net/blog.php?eid=849353
https://comicvine.gamespot.com/profile/nacenetwork21/about-me/
https://pantip.com/topic/40639721
|
{}
|
fullshowbox/nacenetwork21
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
https://www.nace.org/network/members/profile?UserKey=461a690a-bff6-4e4c-be63-ea8e39264459
https://www.nace.org/network/members/profile?UserKey=b4a6a66a-fb8a-4f2b-8af9-04f003ad9d46
https://www.nace.org/network/members/profile?UserKey=24544ab2-551d-42aa-adbe-7a1c1d68fd9c
https://www.nace.org/network/members/profile?UserKey=3e8035d5-056a-482d-9010-9883e5990f4a
https://www.nace.org/network/members/profile?UserKey=d7241c69-28c4-4146-a077-a00cc2c9ccf5
https://www.nace.org/network/members/profile?UserKey=2c58c2fb-13a4-4e5a-b044-f467bb295d83
https://www.nace.org/network/members/profile?UserKey=dd8a290c-e53a-4b56-9a17-d35dbcb6b8bd
https://www.nace.org/network/members/profile?UserKey=0e96a1af-91f4-496a-af02-6d753a1bbded
|
{}
|
fullshowbox/networkprofile
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
https://ragbrai.com/groups/hd-movie-watch-french-exit-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-nobody-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-voyagers-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-godzilla-vs-kong-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-raya-and-the-last-dragon-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-mortal-kombat-2021-full-movie-online-for-free/
https://ragbrai.com/groups/hd-movie-watch-the-father-2021-full-movie-online-for-free/
|
{}
|
fullshowbox/ragbrai
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer intermediate model (B6-6-6 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `intermediate` model in that case.
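To make the compression concrete, here is a minimal sketch (not from the original card) that compares input and output lengths for the checkpoint used below:
```python
# Hedged sketch: the decoder-less model pools the sequence, so hidden states
# come back with roughly a quarter of the input's sequence length.
from transformers import FunnelTokenizer, FunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")

inputs = tokenizer("A short example sentence to encode.", return_tensors="pt")
outputs = model(**inputs)
print(inputs["input_ids"].shape)         # full token sequence, e.g. [1, 10]
print(outputs.last_hidden_state.shape)   # sequence dim reduced to about 1/4
```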
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/intermediate-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer intermediate model (B6-6-6 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = FunnelModel.from_pretrained("funnel-transformer/intermediate")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = TFFunnelModel.from_pretrained("funnel-transformer/intermediatesmall")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/intermediate
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer large model (B8-8-8 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `large` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/large-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer large model (B8-8-8 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = FunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = TFFunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/large
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer medium model (B6-3x2-3x2 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `medium` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/medium-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer medium model (B6-3x2-3x2 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = TFFunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/medium
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer small model (B4-4-4 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `small` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/small-base
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer small model (B4-4-4 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by this model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/small
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer xlarge model (B10-10-10 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `xlarge` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
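As a concrete illustration of the note above (this checkpoint has no decoder), the hidden states cover a shortened sequence, roughly one fourth of the input length. A minimal PyTorch sketch:
```python
# Minimal sketch: without the decoder, the output sequence length is roughly
# one fourth of the input length.
import torch
from transformers import FunnelTokenizer, FunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

print(encoded_input["input_ids"].shape[-1])  # input length in tokens
print(output.last_hidden_state.shape)        # (batch, ~input_length / 4, hidden_size)
```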
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/xlarge-base
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
feature-extraction
|
transformers
|
# Funnel Transformer xlarge model (B10-10-10 with decoder)
Pretrained model on English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = FunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = TFFunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
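If you need a single vector per text rather than per-token features, one illustrative option (not prescribed by the paper) is to mean-pool the token features:
```python
# Minimal sketch: mean-pool token features into one sentence vector.
# Mean pooling is an illustrative choice, not something this model mandates.
import torch
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = FunnelModel.from_pretrained("funnel-transformer/xlarge")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

mask = encoded_input["attention_mask"].unsqueeze(-1).float()   # (batch, seq_len, 1)
sentence_vector = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_vector.shape)                                    # (batch, hidden_size)
```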
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
{"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia", "gigaword"]}
|
funnel-transformer/xlarge
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
funnyfunny/test_transfer
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
funwiththoughts/dummy-model
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
furkanbilgin/gpt2-eksisozluk
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
furuhata/f
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
furunkel/bert-base-StackOverflow-comments_2M
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc-headline
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
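For reference, the list above corresponds approximately to the following `Seq2SeqTrainingArguments`; this is a hedged reconstruction, not the exact training script (the output directory and any unlisted options are assumptions):
```python
# Approximate reconstruction of the training configuration listed above.
# Options not listed (e.g. output_dir) are assumptions; the Adam betas and
# epsilon above are the Trainer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-bbc-headline",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```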
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 2.2978 | 31.8313 | 10.3824 | 29.6182 | 29.4336 | 10.3153 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-base-finetuned-bbc-headline", "results": []}]}
|
furyhawk/t5-base-finetuned-bbc-headline
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 334 | 0.1500 | 24.5024 | 21.4979 | 24.0227 | 24.0303 | 19.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-base-finetuned-bbc", "results": []}]}
|
furyhawk/t5-base-finetuned-bbc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc-headline
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 167 | 3.6454 | 22.4311 | 5.9878 | 20.118 | 20.482 | 18.9009 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-small-finetuned-bbc-headline", "results": []}]}
|
furyhawk/t5-small-finetuned-bbc-headline
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
- Rouge1: 21.2266
- Rouge2: 16.0927
- Rougel: 19.6785
- Rougelsum: 19.8849
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
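Pending fuller documentation, here is a minimal usage sketch. The `summarize:` prefix is the conventional T5 setup and is an assumption, since the preprocessing used for training is not documented here:
```python
# Minimal usage sketch; the "summarize: " prefix is an assumption.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("furyhawk/t5-small-finetuned-bbc")
model = T5ForConditionalGeneration.from_pretrained("furyhawk/t5-small-finetuned-bbc")

article = "Replace me by the BBC article you want to summarise."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=20)  # Gen Len above is ~19 tokens
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```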
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.4882 | 1.0 | 1001 | 0.3238 | 21.2266 | 16.0927 | 19.6785 | 19.8849 | 19.0 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-small-finetuned-bbc", "results": []}]}
|
furyhawk/t5-small-finetuned-bbc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 128 | 2.9003 | 19.4784 | 2.8529 | 14.7786 | 15.0614 | 18.9825 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
|
furyhawk/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
furyhawk/text_sum
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fuyunhuayu/face
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
fvlr/pegasus-xsum
| null |
[
"transformers",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fwafawfwa/fawfwafawfwaf
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fxr/DialoGPT-small-joshua
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fyc132/lfs
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8575
## Model description
More information needed
## Intended uses & limitations
More information needed
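Pending fuller documentation, a minimal fill-mask usage sketch (illustrative only; `[MASK]` is the standard mask token for cased BERT checkpoints):
```python
# Minimal usage sketch for this fill-mask checkpoint; illustrative only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="fznmhmmd/bert-base-cased-wikitext2")
print(unmasker("The capital of France is [MASK]."))
```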
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0964 | 1.0 | 2346 | 7.0532 |
| 6.9055 | 2.0 | 4692 | 6.8710 |
| 6.8574 | 3.0 | 7038 | 6.8917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]}
|
fznmhmmd/bert-base-cased-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- Matthews Correlation: 0.5544
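The metric above is the standard Matthews correlation coefficient (MCC). As an illustration of how such a score is computed from predictions and labels (toy data, not the actual CoLA outputs):
```python
# Illustrative only: Matthews correlation from predictions and labels.
# The values below are toy data, not the CoLA evaluation outputs.
from sklearn.metrics import matthews_corrcoef

labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]
print(matthews_corrcoef(labels, predictions))  # 0.5 for this toy example
```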
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5256 | 1.0 | 535 | 0.5419 | 0.4248 |
| 0.3486 | 2.0 | 1070 | 0.5187 | 0.4999 |
| 0.2406 | 3.0 | 1605 | 0.6580 | 0.5054 |
| 0.1692 | 4.0 | 2140 | 0.7455 | 0.5403 |
| 0.1343 | 5.0 | 2675 | 0.8273 | 0.5544 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5543972545286807, "name": "Matthews Correlation"}]}]}]}
|
fznmhmmd/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1112
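Assuming the loss is the usual token-level cross-entropy (in nats), this corresponds to a perplexity of roughly exp(6.1112) ≈ 451:
```python
# Perplexity implied by the evaluation loss above, assuming token-level
# cross-entropy in nats (the standard causal-LM setup).
import math

print(math.exp(6.1112))  # ≈ 450.9
```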
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5571 | 1.0 | 2249 | 6.4684 |
| 6.1921 | 2.0 | 4498 | 6.1984 |
| 6.0016 | 3.0 | 6747 | 6.1112 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]}
|
fznmhmmd/gpt2-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
fznmhmmd/my-new-shiny-tokenizer
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
g4brielvs/gaga
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
g8a9/vit-geppetto-captioning
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
g9rant/wav2vec2-base-timit-demo-colab
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
g9rant/wav2vec2-large-xls-300m-en-please
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
g9rant/wav2vec2-large-xls-r-300m-en-colab
| null |
[
"tensorboard",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-es-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Wer: 1.0239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
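The effective batch size follows from gradient accumulation: 16 per device × 2 accumulation steps = 32. As a hedged reconstruction (not the exact fine-tuning script; the output directory and unlisted options are assumptions), the list above corresponds approximately to:
```python
# Approximate reconstruction of the configuration listed above; options not
# listed (e.g. output_dir) are assumptions and defaults are left in place.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-common_voice-es-demo",  # assumed
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```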
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.02 | 100 | 6.6465 | 1.0 |
| No log | 0.04 | 200 | 3.0150 | 1.0 |
| No log | 0.05 | 300 | 2.8622 | 1.0003 |
| No log | 0.07 | 400 | 0.9506 | 0.9771 |
| 5.1598 | 0.09 | 500 | 0.4883 | 1.0009 |
| 5.1598 | 0.11 | 600 | 0.3893 | 1.0203 |
| 5.1598 | 0.13 | 700 | 0.3417 | 1.0283 |
| 5.1598 | 0.14 | 800 | 0.3352 | 1.0335 |
| 5.1598 | 0.16 | 900 | 0.2987 | 1.0168 |
| 0.3671 | 0.18 | 1000 | 0.2921 | 1.0159 |
| 0.3671 | 0.2 | 1100 | 0.2770 | 1.0096 |
| 0.3671 | 0.22 | 1200 | 0.2790 | 1.0398 |
| 0.3671 | 0.24 | 1300 | 0.2659 | 1.0190 |
| 0.3671 | 0.25 | 1400 | 0.2657 | 1.0528 |
| 0.289 | 0.27 | 1500 | 0.2556 | 1.0301 |
| 0.289 | 0.29 | 1600 | 0.2514 | 1.0193 |
| 0.289 | 0.31 | 1700 | 0.2708 | 1.0699 |
| 0.289 | 0.33 | 1800 | 0.2455 | 1.0723 |
| 0.289 | 0.34 | 1900 | 0.2456 | 1.0100 |
| 0.271 | 0.36 | 2000 | 0.2338 | 1.0533 |
| 0.271 | 0.38 | 2100 | 0.2479 | 1.0128 |
| 0.271 | 0.4 | 2200 | 0.2483 | 1.0386 |
| 0.271 | 0.42 | 2300 | 0.2436 | 1.0528 |
| 0.271 | 0.43 | 2400 | 0.2382 | 1.0476 |
| 0.2634 | 0.45 | 2500 | 0.2329 | 1.0680 |
| 0.2634 | 0.47 | 2600 | 0.2433 | 1.0581 |
| 0.2634 | 0.49 | 2700 | 0.2354 | 1.0641 |
| 0.2634 | 0.51 | 2800 | 0.2318 | 1.0504 |
| 0.2634 | 0.52 | 2900 | 0.2325 | 1.0500 |
| 0.2522 | 0.54 | 3000 | 0.2344 | 1.0380 |
| 0.2522 | 0.56 | 3100 | 0.2244 | 1.0663 |
| 0.2522 | 0.58 | 3200 | 0.2340 | 1.0647 |
| 0.2522 | 0.6 | 3300 | 0.2288 | 1.0538 |
| 0.2522 | 0.61 | 3400 | 0.2212 | 1.0614 |
| 0.2468 | 0.63 | 3500 | 0.2487 | 1.0557 |
| 0.2468 | 0.65 | 3600 | 0.2330 | 1.0510 |
| 0.2468 | 0.67 | 3700 | 0.2308 | 1.0506 |
| 0.2468 | 0.69 | 3800 | 0.2320 | 1.0451 |
| 0.2468 | 0.71 | 3900 | 0.2261 | 1.0701 |
| 0.2505 | 0.72 | 4000 | 0.2281 | 1.0713 |
| 0.2505 | 0.74 | 4100 | 0.2277 | 1.0741 |
| 0.2505 | 0.76 | 4200 | 0.2253 | 1.0814 |
| 0.2505 | 0.78 | 4300 | 0.2215 | 1.0437 |
| 0.2505 | 0.8 | 4400 | 0.2220 | 1.0557 |
| 0.2434 | 0.81 | 4500 | 0.2184 | 1.0533 |
| 0.2434 | 0.83 | 4600 | 0.2222 | 1.0819 |
| 0.2434 | 0.85 | 4700 | 0.2162 | 1.0238 |
| 0.2434 | 0.87 | 4800 | 0.2132 | 1.0457 |
| 0.2434 | 0.89 | 4900 | 0.2068 | 1.0611 |
| 0.2347 | 0.9 | 5000 | 0.2166 | 1.0332 |
| 0.2347 | 0.92 | 5100 | 0.2087 | 1.0433 |
| 0.2347 | 0.94 | 5200 | 0.2100 | 1.0292 |
| 0.2347 | 0.96 | 5300 | 0.2067 | 1.0734 |
| 0.2347 | 0.98 | 5400 | 0.2148 | 1.0279 |
| 0.2333 | 0.99 | 5500 | 0.2125 | 1.0277 |
| 0.2333 | 1.01 | 5600 | 0.2054 | 1.0453 |
| 0.2333 | 1.03 | 5700 | 0.2091 | 1.0557 |
| 0.2333 | 1.05 | 5800 | 0.2086 | 1.0239 |
| 0.2333 | 1.07 | 5900 | 0.2051 | 1.0645 |
| 0.2087 | 1.09 | 6000 | 0.2103 | 1.0240 |
| 0.2087 | 1.1 | 6100 | 0.2145 | 1.0197 |
| 0.2087 | 1.12 | 6200 | 0.2136 | 1.0248 |
| 0.2087 | 1.14 | 6300 | 0.2045 | 1.0443 |
| 0.2087 | 1.16 | 6400 | 0.2089 | 1.0397 |
| 0.2013 | 1.18 | 6500 | 0.2012 | 1.0654 |
| 0.2013 | 1.19 | 6600 | 0.2054 | 1.0414 |
| 0.2013 | 1.21 | 6700 | 0.2081 | 1.0632 |
| 0.2013 | 1.23 | 6800 | 0.2104 | 1.0190 |
| 0.2013 | 1.25 | 6900 | 0.2045 | 1.0813 |
| 0.2092 | 1.27 | 7000 | 0.2096 | 1.0751 |
| 0.2092 | 1.28 | 7100 | 0.2103 | 1.0328 |
| 0.2092 | 1.3 | 7200 | 0.2044 | 1.0011 |
| 0.2092 | 1.32 | 7300 | 0.2089 | 1.0260 |
| 0.2092 | 1.34 | 7400 | 0.2063 | 1.0551 |
| 0.2076 | 1.36 | 7500 | 0.2029 | 1.0075 |
| 0.2076 | 1.37 | 7600 | 0.2040 | 1.0528 |
| 0.2076 | 1.39 | 7700 | 0.2075 | 1.0398 |
| 0.2076 | 1.41 | 7800 | 0.2023 | 1.0231 |
| 0.2076 | 1.43 | 7900 | 0.2049 | 1.0318 |
| 0.2028 | 1.45 | 8000 | 0.2072 | 1.0763 |
| 0.2028 | 1.47 | 8100 | 0.2075 | 1.0762 |
| 0.2028 | 1.48 | 8200 | 0.2052 | 1.0838 |
| 0.2028 | 1.5 | 8300 | 0.2053 | 1.0407 |
| 0.2028 | 1.52 | 8400 | 0.2066 | 1.0266 |
| 0.2025 | 1.54 | 8500 | 0.2037 | 1.0628 |
| 0.2025 | 1.56 | 8600 | 0.2010 | 1.0351 |
| 0.2025 | 1.57 | 8700 | 0.1961 | 1.0812 |
| 0.2025 | 1.59 | 8800 | 0.1963 | 1.0868 |
| 0.2025 | 1.61 | 8900 | 0.2022 | 1.0710 |
| 0.1997 | 1.63 | 9000 | 0.2051 | 1.0764 |
| 0.1997 | 1.65 | 9100 | 0.1987 | 1.0581 |
| 0.1997 | 1.66 | 9200 | 0.2051 | 1.0611 |
| 0.1997 | 1.68 | 9300 | 0.1999 | 1.0808 |
| 0.1997 | 1.7 | 9400 | 0.1972 | 1.0703 |
| 0.1983 | 1.72 | 9500 | 0.1961 | 1.0584 |
| 0.1983 | 1.74 | 9600 | 0.2031 | 1.0938 |
| 0.1983 | 1.75 | 9700 | 0.2019 | 1.0891 |
| 0.1983 | 1.77 | 9800 | 0.2006 | 1.0542 |
| 0.1983 | 1.79 | 9900 | 0.1925 | 1.0627 |
| 0.1961 | 1.81 | 10000 | 0.1976 | 1.0751 |
| 0.1961 | 1.83 | 10100 | 0.2051 | 1.0611 |
| 0.1961 | 1.85 | 10200 | 0.2037 | 1.0656 |
| 0.1961 | 1.86 | 10300 | 0.2025 | 1.0291 |
| 0.1961 | 1.88 | 10400 | 0.1977 | 1.0525 |
| 0.2025 | 1.9 | 10500 | 0.2030 | 1.0670 |
| 0.2025 | 1.92 | 10600 | 0.1980 | 1.0765 |
| 0.2025 | 1.94 | 10700 | 0.1975 | 1.0254 |
| 0.2025 | 1.95 | 10800 | 0.1986 | 1.0636 |
| 0.2025 | 1.97 | 10900 | 0.1956 | 1.0352 |
| 0.2025 | 1.99 | 11000 | 0.1954 | 1.0265 |
| 0.2025 | 2.01 | 11100 | 0.1957 | 1.0752 |
| 0.2025 | 2.03 | 11200 | 0.1943 | 1.0784 |
| 0.2025 | 2.04 | 11300 | 0.1898 | 1.0341 |
| 0.2025 | 2.06 | 11400 | 0.1921 | 1.0301 |
| 0.1805 | 2.08 | 11500 | 0.1910 | 1.0230 |
| 0.1805 | 2.1 | 11600 | 0.1961 | 1.0203 |
| 0.1805 | 2.12 | 11700 | 0.1973 | 1.0776 |
| 0.1805 | 2.13 | 11800 | 0.1876 | 1.0788 |
| 0.1805 | 2.15 | 11900 | 0.1934 | 1.0251 |
| 0.177 | 2.17 | 12000 | 0.1967 | 1.0340 |
| 0.177 | 2.19 | 12100 | 0.1932 | 1.0131 |
| 0.177 | 2.21 | 12200 | 0.1926 | 1.0078 |
| 0.177 | 2.23 | 12300 | 0.1947 | 0.9991 |
| 0.177 | 2.24 | 12400 | 0.1914 | 1.0213 |
| 0.1782 | 2.26 | 12500 | 0.1962 | 0.9882 |
| 0.1782 | 2.28 | 12600 | 0.1960 | 1.0562 |
| 0.1782 | 2.3 | 12700 | 0.2006 | 1.0401 |
| 0.1782 | 2.32 | 12800 | 0.1950 | 1.0688 |
| 0.1782 | 2.33 | 12900 | 0.1920 | 1.0435 |
| 0.1796 | 2.35 | 13000 | 0.1926 | 1.0667 |
| 0.1796 | 2.37 | 13100 | 0.1949 | 1.0859 |
| 0.1796 | 2.39 | 13200 | 0.1932 | 1.0670 |
| 0.1796 | 2.41 | 13300 | 0.1882 | 1.0663 |
| 0.1796 | 2.42 | 13400 | 0.1877 | 1.0760 |
| 0.1775 | 2.44 | 13500 | 0.1893 | 1.0859 |
| 0.1775 | 2.46 | 13600 | 0.1936 | 1.0702 |
| 0.1775 | 2.48 | 13700 | 0.1871 | 1.0414 |
| 0.1775 | 2.5 | 13800 | 0.1917 | 1.0430 |
| 0.1775 | 2.51 | 13900 | 0.1922 | 1.0422 |
| 0.1778 | 2.53 | 14000 | 0.1875 | 1.0585 |
| 0.1778 | 2.55 | 14100 | 0.1876 | 1.0603 |
| 0.1778 | 2.57 | 14200 | 0.1888 | 1.0628 |
| 0.1778 | 2.59 | 14300 | 0.1948 | 1.0782 |
| 0.1778 | 2.6 | 14400 | 0.1942 | 1.0695 |
| 0.1784 | 2.62 | 14500 | 0.1842 | 1.0863 |
| 0.1784 | 2.64 | 14600 | 0.1850 | 1.0543 |
| 0.1784 | 2.66 | 14700 | 0.1824 | 1.0683 |
| 0.1784 | 2.68 | 14800 | 0.1888 | 1.0693 |
| 0.1784 | 2.7 | 14900 | 0.1871 | 1.0175 |
| 0.1753 | 2.71 | 15000 | 0.1889 | 1.0549 |
| 0.1753 | 2.73 | 15100 | 0.1865 | 1.0544 |
| 0.1753 | 2.75 | 15200 | 0.1918 | 1.0726 |
| 0.1753 | 2.77 | 15300 | 0.1964 | 1.0915 |
| 0.1753 | 2.79 | 15400 | 0.1900 | 1.0610 |
| 0.1768 | 2.8 | 15500 | 0.1894 | 1.0763 |
| 0.1768 | 2.82 | 15600 | 0.1882 | 1.0548 |
| 0.1768 | 2.84 | 15700 | 0.1861 | 1.0902 |
| 0.1768 | 2.86 | 15800 | 0.1860 | 1.0551 |
| 0.1768 | 2.88 | 15900 | 0.1879 | 1.0581 |
| 0.1761 | 2.89 | 16000 | 0.1899 | 1.0544 |
| 0.1761 | 2.91 | 16100 | 0.1860 | 1.0530 |
| 0.1761 | 2.93 | 16200 | 0.1894 | 1.0596 |
| 0.1761 | 2.95 | 16300 | 0.1835 | 1.0394 |
| 0.1761 | 2.97 | 16400 | 0.1852 | 1.0445 |
| 0.1754 | 2.98 | 16500 | 0.1847 | 1.0390 |
| 0.1754 | 3.0 | 16600 | 0.1828 | 1.0440 |
| 0.1754 | 3.02 | 16700 | 0.1869 | 1.0560 |
| 0.1754 | 3.04 | 16800 | 0.1882 | 1.0573 |
| 0.1754 | 3.06 | 16900 | 0.1912 | 1.0600 |
| 0.1592 | 3.08 | 17000 | 0.1921 | 1.0529 |
| 0.1592 | 3.09 | 17100 | 0.1881 | 1.0175 |
| 0.1592 | 3.11 | 17200 | 0.1891 | 1.0654 |
| 0.1592 | 3.13 | 17300 | 0.1889 | 1.0687 |
| 0.1592 | 3.15 | 17400 | 0.1916 | 1.0642 |
| 0.1556 | 3.17 | 17500 | 0.1850 | 1.0295 |
| 0.1556 | 3.18 | 17600 | 0.1875 | 1.0273 |
| 0.1556 | 3.2 | 17700 | 0.1894 | 1.0051 |
| 0.1556 | 3.22 | 17800 | 0.1870 | 1.0462 |
| 0.1556 | 3.24 | 17900 | 0.1831 | 1.0308 |
| 0.1557 | 3.26 | 18000 | 0.1878 | 1.0603 |
| 0.1557 | 3.27 | 18100 | 0.1850 | 1.0566 |
| 0.1557 | 3.29 | 18200 | 0.1843 | 1.0629 |
| 0.1557 | 3.31 | 18300 | 0.1886 | 1.0378 |
| 0.1557 | 3.33 | 18400 | 0.1892 | 1.0381 |
| 0.159 | 3.35 | 18500 | 0.1942 | 1.0519 |
| 0.159 | 3.36 | 18600 | 0.1829 | 1.0622 |
| 0.159 | 3.38 | 18700 | 0.1894 | 1.0557 |
| 0.159 | 3.4 | 18800 | 0.1895 | 1.0627 |
| 0.159 | 3.42 | 18900 | 0.1863 | 1.0362 |
| 0.1582 | 3.44 | 19000 | 0.1888 | 1.0491 |
| 0.1582 | 3.46 | 19100 | 0.1854 | 1.0483 |
| 0.1582 | 3.47 | 19200 | 0.1797 | 0.9787 |
| 0.1582 | 3.49 | 19300 | 0.1785 | 1.0086 |
| 0.1582 | 3.51 | 19400 | 0.1797 | 0.9915 |
| 0.1507 | 3.53 | 19500 | 0.1873 | 1.0266 |
| 0.1507 | 3.55 | 19600 | 0.1838 | 1.0299 |
| 0.1507 | 3.56 | 19700 | 0.1817 | 1.0355 |
| 0.1507 | 3.58 | 19800 | 0.1819 | 1.0271 |
| 0.1507 | 3.6 | 19900 | 0.1883 | 1.0248 |
| 0.1601 | 3.62 | 20000 | 0.1823 | 1.0406 |
| 0.1601 | 3.64 | 20100 | 0.1801 | 1.0261 |
| 0.1601 | 3.65 | 20200 | 0.1783 | 1.0329 |
| 0.1601 | 3.67 | 20300 | 0.1857 | 1.0162 |
| 0.1601 | 3.69 | 20400 | 0.1814 | 1.0212 |
| 0.1552 | 3.71 | 20500 | 0.1837 | 1.0232 |
| 0.1552 | 3.73 | 20600 | 0.1843 | 1.0314 |
| 0.1552 | 3.74 | 20700 | 0.1842 | 1.0258 |
| 0.1552 | 3.76 | 20800 | 0.1821 | 1.0479 |
| 0.1552 | 3.78 | 20900 | 0.1864 | 1.0459 |
| 0.1576 | 3.8 | 21000 | 0.1831 | 1.0364 |
| 0.1576 | 3.82 | 21100 | 0.1852 | 1.0271 |
| 0.1576 | 3.83 | 21200 | 0.1865 | 1.0204 |
| 0.1576 | 3.85 | 21300 | 0.1794 | 1.0324 |
| 0.1576 | 3.87 | 21400 | 0.1826 | 1.0315 |
| 0.1585 | 3.89 | 21500 | 0.1824 | 1.0327 |
| 0.1585 | 3.91 | 21600 | 0.1838 | 1.0208 |
| 0.1585 | 3.93 | 21700 | 0.1850 | 1.0199 |
| 0.1585 | 3.94 | 21800 | 0.1841 | 1.0050 |
| 0.1585 | 3.96 | 21900 | 0.1783 | 1.0003 |
| 0.1572 | 3.98 | 22000 | 0.1787 | 1.0115 |
| 0.1572 | 4.0 | 22100 | 0.1810 | 1.0235 |
| 0.1572 | 4.02 | 22200 | 0.1763 | 1.0191 |
| 0.1572 | 4.03 | 22300 | 0.1764 | 1.0332 |
| 0.1572 | 4.05 | 22400 | 0.1794 | 1.0429 |
| 0.1406 | 4.07 | 22500 | 0.1905 | 1.0288 |
| 0.1406 | 4.09 | 22600 | 0.1776 | 1.0244 |
| 0.1406 | 4.11 | 22700 | 0.1782 | 1.0451 |
| 0.1406 | 4.12 | 22800 | 0.1771 | 1.0387 |
| 0.1406 | 4.14 | 22900 | 0.1788 | 1.0435 |
| 0.14 | 4.16 | 23000 | 0.1792 | 1.0421 |
| 0.14 | 4.18 | 23100 | 0.1841 | 1.0241 |
| 0.14 | 4.2 | 23200 | 0.1769 | 1.0546 |
| 0.14 | 4.21 | 23300 | 0.1815 | 1.0602 |
| 0.14 | 4.23 | 23400 | 0.1784 | 1.0369 |
| 0.1394 | 4.25 | 23500 | 0.1809 | 1.0406 |
| 0.1394 | 4.27 | 23600 | 0.1744 | 1.0133 |
| 0.1394 | 4.29 | 23700 | 0.1771 | 1.0214 |
| 0.1394 | 4.31 | 23800 | 0.1765 | 1.0064 |
| 0.1394 | 4.32 | 23900 | 0.1793 | 1.0200 |
| 0.14 | 4.34 | 24000 | 0.1776 | 1.0352 |
| 0.14 | 4.36 | 24100 | 0.1775 | 1.0294 |
| 0.14 | 4.38 | 24200 | 0.1763 | 1.0213 |
| 0.14 | 4.4 | 24300 | 0.1697 | 1.0302 |
| 0.14 | 4.41 | 24400 | 0.1771 | 1.0259 |
| 0.1408 | 4.43 | 24500 | 0.1747 | 1.0409 |
| 0.1408 | 4.45 | 24600 | 0.1769 | 1.0278 |
| 0.1408 | 4.47 | 24700 | 0.1767 | 1.0190 |
| 0.1408 | 4.49 | 24800 | 0.1745 | 1.0281 |
| 0.1408 | 4.5 | 24900 | 0.1738 | 1.0356 |
| 0.1391 | 4.52 | 25000 | 0.1781 | 1.0429 |
| 0.1391 | 4.54 | 25100 | 0.1784 | 1.0076 |
| 0.1391 | 4.56 | 25200 | 0.1771 | 1.0157 |
| 0.1391 | 4.58 | 25300 | 0.1758 | 1.0337 |
| 0.1391 | 4.59 | 25400 | 0.1758 | 1.0466 |
| 0.1398 | 4.61 | 25500 | 0.1724 | 1.0403 |
| 0.1398 | 4.63 | 25600 | 0.1765 | 1.0481 |
| 0.1398 | 4.65 | 25700 | 0.1757 | 1.0320 |
| 0.1398 | 4.67 | 25800 | 0.1814 | 1.0479 |
| 0.1398 | 4.69 | 25900 | 0.1713 | 1.0251 |
| 0.1427 | 4.7 | 26000 | 0.1735 | 1.0340 |
| 0.1427 | 4.72 | 26100 | 0.1765 | 1.0358 |
| 0.1427 | 4.74 | 26200 | 0.1731 | 1.0220 |
| 0.1427 | 4.76 | 26300 | 0.1769 | 1.0261 |
| 0.1427 | 4.78 | 26400 | 0.1747 | 1.0139 |
| 0.1424 | 4.79 | 26500 | 0.1791 | 1.0406 |
| 0.1424 | 4.81 | 26600 | 0.1735 | 1.0497 |
| 0.1424 | 4.83 | 26700 | 0.1710 | 1.0433 |
| 0.1424 | 4.85 | 26800 | 0.1771 | 1.0002 |
| 0.1424 | 4.87 | 26900 | 0.1748 | 1.0046 |
| 0.1419 | 4.88 | 27000 | 0.1794 | 1.0332 |
| 0.1419 | 4.9 | 27100 | 0.1772 | 1.0558 |
| 0.1419 | 4.92 | 27200 | 0.1757 | 1.0477 |
| 0.1419 | 4.94 | 27300 | 0.1735 | 1.0324 |
| 0.1419 | 4.96 | 27400 | 0.1758 | 1.0260 |
| 0.1433 | 4.97 | 27500 | 0.1767 | 1.0422 |
| 0.1433 | 4.99 | 27600 | 0.1695 | 1.0386 |
| 0.1433 | 5.01 | 27700 | 0.1763 | 1.0571 |
| 0.1433 | 5.03 | 27800 | 0.1743 | 1.0367 |
| 0.1433 | 5.05 | 27900 | 0.1804 | 1.0255 |
| 0.1306 | 5.07 | 28000 | 0.1803 | 1.0377 |
| 0.1306 | 5.08 | 28100 | 0.1750 | 1.0552 |
| 0.1306 | 5.1 | 28200 | 0.1743 | 1.0512 |
| 0.1306 | 5.12 | 28300 | 0.1777 | 1.0584 |
| 0.1306 | 5.14 | 28400 | 0.1726 | 1.0374 |
| 0.123 | 5.16 | 28500 | 0.1776 | 1.0439 |
| 0.123 | 5.17 | 28600 | 0.1759 | 1.0682 |
| 0.123 | 5.19 | 28700 | 0.1724 | 1.0511 |
| 0.123 | 5.21 | 28800 | 0.1677 | 1.0560 |
| 0.123 | 5.23 | 28900 | 0.1699 | 1.0421 |
| 0.1217 | 5.25 | 29000 | 0.1803 | 1.0370 |
| 0.1217 | 5.26 | 29100 | 0.1770 | 1.0474 |
| 0.1217 | 5.28 | 29200 | 0.1733 | 1.0332 |
| 0.1217 | 5.3 | 29300 | 0.1746 | 1.0158 |
| 0.1217 | 5.32 | 29400 | 0.1763 | 1.0341 |
| 0.1246 | 5.34 | 29500 | 0.1775 | 1.0348 |
| 0.1246 | 5.35 | 29600 | 0.1730 | 1.0492 |
| 0.1246 | 5.37 | 29700 | 0.1730 | 1.0503 |
| 0.1246 | 5.39 | 29800 | 0.1727 | 1.0437 |
| 0.1246 | 5.41 | 29900 | 0.1744 | 1.0539 |
| 0.127 | 5.43 | 30000 | 0.1748 | 1.0463 |
| 0.127 | 5.44 | 30100 | 0.1746 | 1.0555 |
| 0.127 | 5.46 | 30200 | 0.1810 | 1.0558 |
| 0.127 | 5.48 | 30300 | 0.1773 | 1.0407 |
| 0.127 | 5.5 | 30400 | 0.1722 | 1.0489 |
| 0.1276 | 5.52 | 30500 | 0.1720 | 1.0520 |
| 0.1276 | 5.54 | 30600 | 0.1777 | 1.0347 |
| 0.1276 | 5.55 | 30700 | 0.1685 | 1.0347 |
| 0.1276 | 5.57 | 30800 | 0.1659 | 1.0338 |
| 0.1276 | 5.59 | 30900 | 0.1756 | 1.0228 |
| 0.1246 | 5.61 | 31000 | 0.1717 | 1.0409 |
| 0.1246 | 5.63 | 31100 | 0.1764 | 1.0202 |
| 0.1246 | 5.64 | 31200 | 0.1693 | 1.0314 |
| 0.1246 | 5.66 | 31300 | 0.1731 | 1.0319 |
| 0.1246 | 5.68 | 31400 | 0.1688 | 1.0380 |
| 0.1271 | 5.7 | 31500 | 0.1671 | 1.0350 |
| 0.1271 | 5.72 | 31600 | 0.1676 | 1.0430 |
| 0.1271 | 5.73 | 31700 | 0.1656 | 1.0441 |
| 0.1271 | 5.75 | 31800 | 0.1664 | 1.0403 |
| 0.1271 | 5.77 | 31900 | 0.1691 | 1.0152 |
| 0.1259 | 5.79 | 32000 | 0.1702 | 1.0018 |
| 0.1259 | 5.81 | 32100 | 0.1664 | 1.0246 |
| 0.1259 | 5.82 | 32200 | 0.1737 | 1.0340 |
| 0.1259 | 5.84 | 32300 | 0.1742 | 1.0449 |
| 0.1259 | 5.86 | 32400 | 0.1707 | 1.0279 |
| 0.1273 | 5.88 | 32500 | 0.1697 | 1.0471 |
| 0.1273 | 5.9 | 32600 | 0.1668 | 1.0322 |
| 0.1273 | 5.92 | 32700 | 0.1706 | 1.0378 |
| 0.1273 | 5.93 | 32800 | 0.1704 | 1.0350 |
| 0.1273 | 5.95 | 32900 | 0.1725 | 1.0244 |
| 0.123 | 5.97 | 33000 | 0.1678 | 1.0447 |
| 0.123 | 5.99 | 33100 | 0.1681 | 1.0438 |
| 0.123 | 6.01 | 33200 | 0.1689 | 1.0297 |
| 0.123 | 6.02 | 33300 | 0.1690 | 1.0333 |
| 0.123 | 6.04 | 33400 | 0.1734 | 1.0296 |
| 0.1163 | 6.06 | 33500 | 0.1748 | 1.0307 |
| 0.1163 | 6.08 | 33600 | 0.1715 | 1.0123 |
| 0.1163 | 6.1 | 33700 | 0.1668 | 1.0117 |
| 0.1163 | 6.11 | 33800 | 0.1690 | 1.0230 |
| 0.1163 | 6.13 | 33900 | 0.1693 | 1.0166 |
| 0.1101 | 6.15 | 34000 | 0.1728 | 1.0162 |
| 0.1101 | 6.17 | 34100 | 0.1683 | 1.0107 |
| 0.1101 | 6.19 | 34200 | 0.1703 | 0.9814 |
| 0.1101 | 6.2 | 34300 | 0.1692 | 1.0007 |
| 0.1101 | 6.22 | 34400 | 0.1690 | 1.0000 |
| 0.1118 | 6.24 | 34500 | 0.1734 | 0.9972 |
| 0.1118 | 6.26 | 34600 | 0.1739 | 1.0096 |
| 0.1118 | 6.28 | 34700 | 0.1749 | 1.0047 |
| 0.1118 | 6.3 | 34800 | 0.1709 | 1.0111 |
| 0.1118 | 6.31 | 34900 | 0.1717 | 1.0179 |
| 0.1153 | 6.33 | 35000 | 0.1690 | 1.0155 |
| 0.1153 | 6.35 | 35100 | 0.1710 | 1.0144 |
| 0.1153 | 6.37 | 35200 | 0.1719 | 1.0030 |
| 0.1153 | 6.39 | 35300 | 0.1690 | 1.0272 |
| 0.1153 | 6.4 | 35400 | 0.1673 | 1.0103 |
| 0.1106 | 6.42 | 35500 | 0.1710 | 1.0222 |
| 0.1106 | 6.44 | 35600 | 0.1747 | 1.0173 |
| 0.1106 | 6.46 | 35700 | 0.1721 | 0.9933 |
| 0.1106 | 6.48 | 35800 | 0.1670 | 1.0184 |
| 0.1106 | 6.49 | 35900 | 0.1714 | 1.0122 |
| 0.1116 | 6.51 | 36000 | 0.1717 | 1.0035 |
| 0.1116 | 6.53 | 36100 | 0.1685 | 1.0099 |
| 0.1116 | 6.55 | 36200 | 0.1687 | 1.0288 |
| 0.1116 | 6.57 | 36300 | 0.1664 | 1.0314 |
| 0.1116 | 6.58 | 36400 | 0.1665 | 1.0264 |
| 0.1128 | 6.6 | 36500 | 0.1681 | 1.0420 |
| 0.1128 | 6.62 | 36600 | 0.1682 | 1.0409 |
| 0.1128 | 6.64 | 36700 | 0.1717 | 1.0271 |
| 0.1128 | 6.66 | 36800 | 0.1717 | 1.0166 |
| 0.1128 | 6.68 | 36900 | 0.1755 | 1.0175 |
| 0.1134 | 6.69 | 37000 | 0.1623 | 1.0185 |
| 0.1134 | 6.71 | 37100 | 0.1674 | 1.0302 |
| 0.1134 | 6.73 | 37200 | 0.1633 | 1.0325 |
| 0.1134 | 6.75 | 37300 | 0.1628 | 1.0228 |
| 0.1134 | 6.77 | 37400 | 0.1636 | 1.0243 |
| 0.1102 | 6.78 | 37500 | 0.1667 | 1.0282 |
| 0.1102 | 6.8 | 37600 | 0.1623 | 1.0212 |
| 0.1102 | 6.82 | 37700 | 0.1639 | 1.0140 |
| 0.1102 | 6.84 | 37800 | 0.1587 | 1.0258 |
| 0.1102 | 6.86 | 37900 | 0.1610 | 1.0087 |
| 0.1113 | 6.87 | 38000 | 0.1647 | 1.0199 |
| 0.1113 | 6.89 | 38100 | 0.1609 | 1.0054 |
| 0.1113 | 6.91 | 38200 | 0.1602 | 1.0145 |
| 0.1113 | 6.93 | 38300 | 0.1602 | 1.0144 |
| 0.1113 | 6.95 | 38400 | 0.1602 | 1.0375 |
| 0.1071 | 6.96 | 38500 | 0.1592 | 1.0259 |
| 0.1071 | 6.98 | 38600 | 0.1612 | 1.0236 |
| 0.1071 | 7.0 | 38700 | 0.1621 | 1.0277 |
| 0.1071 | 7.02 | 38800 | 0.1669 | 1.0367 |
| 0.1071 | 7.04 | 38900 | 0.1742 | 1.0484 |
| 0.1062 | 7.05 | 39000 | 0.1752 | 1.0302 |
| 0.1062 | 7.07 | 39100 | 0.1676 | 1.0244 |
| 0.1062 | 7.09 | 39200 | 0.1723 | 1.0300 |
| 0.1062 | 7.11 | 39300 | 0.1727 | 1.0294 |
| 0.1062 | 7.13 | 39400 | 0.1711 | 1.0255 |
| 0.1021 | 7.15 | 39500 | 0.1699 | 1.0471 |
| 0.1021 | 7.16 | 39600 | 0.1682 | 1.0426 |
| 0.1021 | 7.18 | 39700 | 0.1713 | 1.0233 |
| 0.1021 | 7.2 | 39800 | 0.1682 | 1.0259 |
| 0.1021 | 7.22 | 39900 | 0.1710 | 1.0162 |
| 0.103 | 7.24 | 40000 | 0.1725 | 1.0283 |
| 0.103 | 7.25 | 40100 | 0.1729 | 1.0264 |
| 0.103 | 7.27 | 40200 | 0.1665 | 1.0451 |
| 0.103 | 7.29 | 40300 | 0.1671 | 1.0386 |
| 0.103 | 7.31 | 40400 | 0.1671 | 1.0316 |
| 0.0981 | 7.33 | 40500 | 0.1708 | 1.0257 |
| 0.0981 | 7.34 | 40600 | 0.1642 | 1.0152 |
| 0.0981 | 7.36 | 40700 | 0.1707 | 1.0110 |
| 0.0981 | 7.38 | 40800 | 0.1675 | 1.0186 |
| 0.0981 | 7.4 | 40900 | 0.1702 | 1.0123 |
| 0.1005 | 7.42 | 41000 | 0.1699 | 1.0159 |
| 0.1005 | 7.43 | 41100 | 0.1703 | 1.0219 |
| 0.1005 | 7.45 | 41200 | 0.1707 | 1.0194 |
| 0.1005 | 7.47 | 41300 | 0.1644 | 1.0016 |
| 0.1005 | 7.49 | 41400 | 0.1716 | 0.9941 |
| 0.1021 | 7.51 | 41500 | 0.1670 | 1.0159 |
| 0.1021 | 7.53 | 41600 | 0.1667 | 1.0033 |
| 0.1021 | 7.54 | 41700 | 0.1667 | 1.0176 |
| 0.1021 | 7.56 | 41800 | 0.1679 | 1.0194 |
| 0.1021 | 7.58 | 41900 | 0.1632 | 1.0418 |
| 0.0963 | 7.6 | 42000 | 0.1712 | 1.0152 |
| 0.0963 | 7.62 | 42100 | 0.1632 | 1.0364 |
| 0.0963 | 7.63 | 42200 | 0.1702 | 1.0229 |
| 0.0963 | 7.65 | 42300 | 0.1655 | 1.0179 |
| 0.0963 | 7.67 | 42400 | 0.1698 | 1.0329 |
| 0.1014 | 7.69 | 42500 | 0.1691 | 1.0398 |
| 0.1014 | 7.71 | 42600 | 0.1638 | 1.0487 |
| 0.1014 | 7.72 | 42700 | 0.1617 | 1.0210 |
| 0.1014 | 7.74 | 42800 | 0.1648 | 1.0124 |
| 0.1014 | 7.76 | 42900 | 0.1608 | 1.0202 |
| 0.1008 | 7.78 | 43000 | 0.1611 | 1.0353 |
| 0.1008 | 7.8 | 43100 | 0.1633 | 1.0319 |
| 0.1008 | 7.81 | 43200 | 0.1640 | 1.0032 |
| 0.1008 | 7.83 | 43300 | 0.1589 | 0.9985 |
| 0.1008 | 7.85 | 43400 | 0.1630 | 0.9975 |
| 0.0988 | 7.87 | 43500 | 0.1604 | 1.0053 |
| 0.0988 | 7.89 | 43600 | 0.1687 | 1.0063 |
| 0.0988 | 7.91 | 43700 | 0.1619 | 1.0096 |
| 0.0988 | 7.92 | 43800 | 0.1565 | 0.9901 |
| 0.0988 | 7.94 | 43900 | 0.1619 | 0.9742 |
| 0.102 | 7.96 | 44000 | 0.1598 | 0.9593 |
| 0.102 | 7.98 | 44100 | 0.1635 | 0.9718 |
| 0.102 | 8.0 | 44200 | 0.1624 | 0.9903 |
| 0.102 | 8.01 | 44300 | 0.1605 | 0.9882 |
| 0.102 | 8.03 | 44400 | 0.1657 | 1.0128 |
| 0.0961 | 8.05 | 44500 | 0.1651 | 1.0155 |
| 0.0961 | 8.07 | 44600 | 0.1680 | 1.0194 |
| 0.0961 | 8.09 | 44700 | 0.1694 | 1.0112 |
| 0.0961 | 8.1 | 44800 | 0.1665 | 1.0073 |
| 0.0961 | 8.12 | 44900 | 0.1612 | 1.0200 |
| 0.0894 | 8.14 | 45000 | 0.1652 | 1.0337 |
| 0.0894 | 8.16 | 45100 | 0.1626 | 1.0086 |
| 0.0894 | 8.18 | 45200 | 0.1639 | 1.0083 |
| 0.0894 | 8.19 | 45300 | 0.1634 | 1.0223 |
| 0.0894 | 8.21 | 45400 | 0.1631 | 1.0339 |
| 0.0887 | 8.23 | 45500 | 0.1640 | 1.0311 |
| 0.0887 | 8.25 | 45600 | 0.1661 | 1.0264 |
| 0.0887 | 8.27 | 45700 | 0.1650 | 1.0315 |
| 0.0887 | 8.29 | 45800 | 0.1624 | 1.0390 |
| 0.0887 | 8.3 | 45900 | 0.1624 | 1.0350 |
| 0.0884 | 8.32 | 46000 | 0.1615 | 1.0318 |
| 0.0884 | 8.34 | 46100 | 0.1628 | 1.0410 |
| 0.0884 | 8.36 | 46200 | 0.1627 | 1.0429 |
| 0.0884 | 8.38 | 46300 | 0.1644 | 1.0320 |
| 0.0884 | 8.39 | 46400 | 0.1633 | 1.0177 |
| 0.0893 | 8.41 | 46500 | 0.1654 | 1.0189 |
| 0.0893 | 8.43 | 46600 | 0.1598 | 1.0154 |
| 0.0893 | 8.45 | 46700 | 0.1618 | 1.0250 |
| 0.0893 | 8.47 | 46800 | 0.1639 | 1.0402 |
| 0.0893 | 8.48 | 46900 | 0.1616 | 1.0336 |
| 0.0869 | 8.5 | 47000 | 0.1613 | 1.0296 |
| 0.0869 | 8.52 | 47100 | 0.1648 | 1.0568 |
| 0.0869 | 8.54 | 47200 | 0.1625 | 1.0256 |
| 0.0869 | 8.56 | 47300 | 0.1609 | 1.0390 |
| 0.0869 | 8.57 | 47400 | 0.1606 | 1.0450 |
| 0.0894 | 8.59 | 47500 | 0.1605 | 1.0445 |
| 0.0894 | 8.61 | 47600 | 0.1660 | 1.0402 |
| 0.0894 | 8.63 | 47700 | 0.1618 | 1.0444 |
| 0.0894 | 8.65 | 47800 | 0.1669 | 1.0333 |
| 0.0894 | 8.66 | 47900 | 0.1627 | 1.0364 |
| 0.0885 | 8.68 | 48000 | 0.1616 | 1.0334 |
| 0.0885 | 8.7 | 48100 | 0.1626 | 1.0564 |
| 0.0885 | 8.72 | 48200 | 0.1624 | 1.0396 |
| 0.0885 | 8.74 | 48300 | 0.1623 | 1.0396 |
| 0.0885 | 8.76 | 48400 | 0.1612 | 1.0112 |
| 0.0888 | 8.77 | 48500 | 0.1638 | 1.0292 |
| 0.0888 | 8.79 | 48600 | 0.1639 | 0.9988 |
| 0.0888 | 8.81 | 48700 | 0.1618 | 1.0127 |
| 0.0888 | 8.83 | 48800 | 0.1584 | 1.0042 |
| 0.0888 | 8.85 | 48900 | 0.1615 | 1.0041 |
| 0.0887 | 8.86 | 49000 | 0.1637 | 1.0269 |
| 0.0887 | 8.88 | 49100 | 0.1627 | 0.9989 |
| 0.0887 | 8.9 | 49200 | 0.1583 | 1.0104 |
| 0.0887 | 8.92 | 49300 | 0.1600 | 1.0214 |
| 0.0887 | 8.94 | 49400 | 0.1599 | 1.0126 |
| 0.0893 | 8.95 | 49500 | 0.1595 | 1.0516 |
| 0.0893 | 8.97 | 49600 | 0.1625 | 1.0464 |
| 0.0893 | 8.99 | 49700 | 0.1595 | 1.0361 |
| 0.0893 | 9.01 | 49800 | 0.1614 | 1.0469 |
| 0.0893 | 9.03 | 49900 | 0.1612 | 1.0304 |
| 0.0834 | 9.04 | 50000 | 0.1643 | 1.0335 |
| 0.0834 | 9.06 | 50100 | 0.1640 | 1.0175 |
| 0.0834 | 9.08 | 50200 | 0.1655 | 1.0264 |
| 0.0834 | 9.1 | 50300 | 0.1678 | 1.0243 |
| 0.0834 | 9.12 | 50400 | 0.1659 | 1.0145 |
| 0.079 | 9.14 | 50500 | 0.1644 | 1.0316 |
| 0.079 | 9.15 | 50600 | 0.1630 | 1.0326 |
| 0.079 | 9.17 | 50700 | 0.1634 | 1.0154 |
| 0.079 | 9.19 | 50800 | 0.1697 | 1.0095 |
| 0.079 | 9.21 | 50900 | 0.1678 | 1.0050 |
| 0.078 | 9.23 | 51000 | 0.1626 | 1.0159 |
| 0.078 | 9.24 | 51100 | 0.1666 | 1.0238 |
| 0.078 | 9.26 | 51200 | 0.1644 | 1.0244 |
| 0.078 | 9.28 | 51300 | 0.1655 | 1.0345 |
| 0.078 | 9.3 | 51400 | 0.1615 | 1.0237 |
| 0.0776 | 9.32 | 51500 | 0.1664 | 1.0180 |
| 0.0776 | 9.33 | 51600 | 0.1603 | 1.0208 |
| 0.0776 | 9.35 | 51700 | 0.1594 | 1.0230 |
| 0.0776 | 9.37 | 51800 | 0.1622 | 1.0201 |
| 0.0776 | 9.39 | 51900 | 0.1596 | 1.0039 |
| 0.0782 | 9.41 | 52000 | 0.1645 | 1.0204 |
| 0.0782 | 9.42 | 52100 | 0.1640 | 1.0318 |
| 0.0782 | 9.44 | 52200 | 0.1621 | 1.0290 |
| 0.0782 | 9.46 | 52300 | 0.1638 | 1.0318 |
| 0.0782 | 9.48 | 52400 | 0.1613 | 1.0217 |
| 0.0782 | 9.5 | 52500 | 0.1609 | 1.0261 |
| 0.0782 | 9.52 | 52600 | 0.1625 | 1.0101 |
| 0.0782 | 9.53 | 52700 | 0.1613 | 1.0058 |
| 0.0782 | 9.55 | 52800 | 0.1599 | 1.0068 |
| 0.0782 | 9.57 | 52900 | 0.1600 | 1.0110 |
| 0.0797 | 9.59 | 53000 | 0.1594 | 1.0171 |
| 0.0797 | 9.61 | 53100 | 0.1583 | 1.0124 |
| 0.0797 | 9.62 | 53200 | 0.1646 | 1.0093 |
| 0.0797 | 9.64 | 53300 | 0.1580 | 1.0201 |
| 0.0797 | 9.66 | 53400 | 0.1599 | 1.0207 |
| 0.0783 | 9.68 | 53500 | 0.1577 | 1.0226 |
| 0.0783 | 9.7 | 53600 | 0.1593 | 1.0160 |
| 0.0783 | 9.71 | 53700 | 0.1570 | 1.0173 |
| 0.0783 | 9.73 | 53800 | 0.1614 | 1.0299 |
| 0.0783 | 9.75 | 53900 | 0.1610 | 1.0184 |
| 0.0779 | 9.77 | 54000 | 0.1606 | 1.0173 |
| 0.0779 | 9.79 | 54100 | 0.1577 | 1.0032 |
| 0.0779 | 9.8 | 54200 | 0.1590 | 1.0070 |
| 0.0779 | 9.82 | 54300 | 0.1580 | 1.0257 |
| 0.0779 | 9.84 | 54400 | 0.1592 | 1.0108 |
| 0.0778 | 9.86 | 54500 | 0.1617 | 0.9907 |
| 0.0778 | 9.88 | 54600 | 0.1605 | 1.0189 |
| 0.0778 | 9.89 | 54700 | 0.1605 | 1.0177 |
| 0.0778 | 9.91 | 54800 | 0.1536 | 1.0275 |
| 0.0778 | 9.93 | 54900 | 0.1658 | 1.0282 |
| 0.0777 | 9.95 | 55000 | 0.1543 | 1.0385 |
| 0.0777 | 9.97 | 55100 | 0.1559 | 1.0375 |
| 0.0777 | 9.99 | 55200 | 0.1590 | 1.0215 |
| 0.0777 | 10.0 | 55300 | 0.1624 | 1.0242 |
| 0.0777 | 10.02 | 55400 | 0.1635 | 1.0244 |
| 0.0712 | 10.04 | 55500 | 0.1629 | 1.0298 |
| 0.0712 | 10.06 | 55600 | 0.1601 | 1.0299 |
| 0.0712 | 10.08 | 55700 | 0.1625 | 1.0117 |
| 0.0712 | 10.09 | 55800 | 0.1650 | 1.0233 |
| 0.0712 | 10.11 | 55900 | 0.1631 | 1.0061 |
| 0.0667 | 10.13 | 56000 | 0.1637 | 1.0226 |
| 0.0667 | 10.15 | 56100 | 0.1607 | 1.0042 |
| 0.0667 | 10.17 | 56200 | 0.1599 | 1.0117 |
| 0.0667 | 10.18 | 56300 | 0.1623 | 1.0246 |
| 0.0667 | 10.2 | 56400 | 0.1639 | 1.0294 |
| 0.0695 | 10.22 | 56500 | 0.1650 | 1.0232 |
| 0.0695 | 10.24 | 56600 | 0.1620 | 1.0289 |
| 0.0695 | 10.26 | 56700 | 0.1667 | 1.0209 |
| 0.0695 | 10.27 | 56800 | 0.1580 | 1.0163 |
| 0.0695 | 10.29 | 56900 | 0.1646 | 1.0293 |
| 0.0686 | 10.31 | 57000 | 0.1636 | 1.0106 |
| 0.0686 | 10.33 | 57100 | 0.1586 | 1.0044 |
| 0.0686 | 10.35 | 57200 | 0.1582 | 1.0213 |
| 0.0686 | 10.37 | 57300 | 0.1627 | 1.0151 |
| 0.0686 | 10.38 | 57400 | 0.1619 | 1.0248 |
| 0.0686 | 10.4 | 57500 | 0.1596 | 1.0098 |
| 0.0686 | 10.42 | 57600 | 0.1606 | 1.0031 |
| 0.0686 | 10.44 | 57700 | 0.1620 | 1.0046 |
| 0.0686 | 10.46 | 57800 | 0.1592 | 1.0018 |
| 0.0686 | 10.47 | 57900 | 0.1592 | 1.0058 |
| 0.0669 | 10.49 | 58000 | 0.1605 | 0.9961 |
| 0.0669 | 10.51 | 58100 | 0.1632 | 1.0102 |
| 0.0669 | 10.53 | 58200 | 0.1593 | 1.0061 |
| 0.0669 | 10.55 | 58300 | 0.1586 | 1.0091 |
| 0.0669 | 10.56 | 58400 | 0.1603 | 1.0085 |
| 0.068 | 10.58 | 58500 | 0.1579 | 1.0031 |
| 0.068 | 10.6 | 58600 | 0.1591 | 1.0021 |
| 0.068 | 10.62 | 58700 | 0.1590 | 1.0163 |
| 0.068 | 10.64 | 58800 | 0.1584 | 1.0045 |
| 0.068 | 10.65 | 58900 | 0.1594 | 1.0158 |
| 0.0693 | 10.67 | 59000 | 0.1568 | 1.0052 |
| 0.0693 | 10.69 | 59100 | 0.1581 | 0.9955 |
| 0.0693 | 10.71 | 59200 | 0.1622 | 0.9917 |
| 0.0693 | 10.73 | 59300 | 0.1580 | 1.0018 |
| 0.0693 | 10.75 | 59400 | 0.1601 | 1.0077 |
| 0.0699 | 10.76 | 59500 | 0.1605 | 0.9997 |
| 0.0699 | 10.78 | 59600 | 0.1585 | 1.0009 |
| 0.0699 | 10.8 | 59700 | 0.1541 | 1.0058 |
| 0.0699 | 10.82 | 59800 | 0.1583 | 1.0026 |
| 0.0699 | 10.84 | 59900 | 0.1592 | 0.9992 |
| 0.0671 | 10.85 | 60000 | 0.1590 | 1.0004 |
| 0.0671 | 10.87 | 60100 | 0.1585 | 1.0060 |
| 0.0671 | 10.89 | 60200 | 0.1579 | 1.0063 |
| 0.0671 | 10.91 | 60300 | 0.1582 | 0.9949 |
| 0.0671 | 10.93 | 60400 | 0.1562 | 1.0004 |
| 0.0661 | 10.94 | 60500 | 0.1560 | 0.9950 |
| 0.0661 | 10.96 | 60600 | 0.1564 | 0.9990 |
| 0.0661 | 10.98 | 60700 | 0.1552 | 0.9982 |
| 0.0661 | 11.0 | 60800 | 0.1596 | 1.0018 |
| 0.0661 | 11.02 | 60900 | 0.1618 | 0.9905 |
| 0.0634 | 11.03 | 61000 | 0.1652 | 0.9890 |
| 0.0634 | 11.05 | 61100 | 0.1649 | 0.9886 |
| 0.0634 | 11.07 | 61200 | 0.1668 | 0.9870 |
| 0.0634 | 11.09 | 61300 | 0.1663 | 0.9921 |
| 0.0634 | 11.11 | 61400 | 0.1650 | 0.9919 |
| 0.0587 | 11.13 | 61500 | 0.1674 | 0.9831 |
| 0.0587 | 11.14 | 61600 | 0.1633 | 0.9793 |
| 0.0587 | 11.16 | 61700 | 0.1665 | 0.9781 |
| 0.0587 | 11.18 | 61800 | 0.1642 | 0.9821 |
| 0.0587 | 11.2 | 61900 | 0.1638 | 0.9797 |
| 0.0581 | 11.22 | 62000 | 0.1628 | 0.9727 |
| 0.0581 | 11.23 | 62100 | 0.1661 | 0.9796 |
| 0.0581 | 11.25 | 62200 | 0.1641 | 0.9830 |
| 0.0581 | 11.27 | 62300 | 0.1601 | 0.9867 |
| 0.0581 | 11.29 | 62400 | 0.1626 | 0.9757 |
| 0.0584 | 11.31 | 62500 | 0.1632 | 1.0014 |
| 0.0584 | 11.32 | 62600 | 0.1626 | 1.0052 |
| 0.0584 | 11.34 | 62700 | 0.1586 | 1.0098 |
| 0.0584 | 11.36 | 62800 | 0.1597 | 1.0151 |
| 0.0584 | 11.38 | 62900 | 0.1624 | 1.0054 |
| 0.0589 | 11.4 | 63000 | 0.1618 | 1.0018 |
| 0.0589 | 11.41 | 63100 | 0.1635 | 1.0032 |
| 0.0589 | 11.43 | 63200 | 0.1654 | 1.0142 |
| 0.0589 | 11.45 | 63300 | 0.1646 | 1.0031 |
| 0.0589 | 11.47 | 63400 | 0.1618 | 1.0118 |
| 0.0579 | 11.49 | 63500 | 0.1634 | 1.0218 |
| 0.0579 | 11.51 | 63600 | 0.1616 | 1.0179 |
| 0.0579 | 11.52 | 63700 | 0.1603 | 1.0036 |
| 0.0579 | 11.54 | 63800 | 0.1610 | 1.0150 |
| 0.0579 | 11.56 | 63900 | 0.1605 | 1.0285 |
| 0.0572 | 11.58 | 64000 | 0.1621 | 1.0261 |
| 0.0572 | 11.6 | 64100 | 0.1625 | 1.0252 |
| 0.0572 | 11.61 | 64200 | 0.1677 | 1.0257 |
| 0.0572 | 11.63 | 64300 | 0.1656 | 1.0243 |
| 0.0572 | 11.65 | 64400 | 0.1669 | 1.0270 |
| 0.0592 | 11.67 | 64500 | 0.1605 | 1.0305 |
| 0.0592 | 11.69 | 64600 | 0.1633 | 1.0277 |
| 0.0592 | 11.7 | 64700 | 0.1606 | 1.0176 |
| 0.0592 | 11.72 | 64800 | 0.1618 | 1.0249 |
| 0.0592 | 11.74 | 64900 | 0.1609 | 1.0113 |
| 0.0595 | 11.76 | 65000 | 0.1609 | 1.0254 |
| 0.0595 | 11.78 | 65100 | 0.1662 | 1.0275 |
| 0.0595 | 11.79 | 65200 | 0.1652 | 1.0164 |
| 0.0595 | 11.81 | 65300 | 0.1638 | 1.0266 |
| 0.0595 | 11.83 | 65400 | 0.1589 | 1.0274 |
| 0.0588 | 11.85 | 65500 | 0.1607 | 1.0136 |
| 0.0588 | 11.87 | 65600 | 0.1592 | 1.0136 |
| 0.0588 | 11.88 | 65700 | 0.1581 | 1.0183 |
| 0.0588 | 11.9 | 65800 | 0.1587 | 1.0133 |
| 0.0588 | 11.92 | 65900 | 0.1596 | 1.0170 |
| 0.0558 | 11.94 | 66000 | 0.1590 | 1.0161 |
| 0.0558 | 11.96 | 66100 | 0.1597 | 1.0193 |
| 0.0558 | 11.98 | 66200 | 0.1590 | 1.0193 |
| 0.0558 | 11.99 | 66300 | 0.1608 | 1.0242 |
| 0.0558 | 12.01 | 66400 | 0.1642 | 1.0231 |
| 0.0555 | 12.03 | 66500 | 0.1679 | 1.0168 |
| 0.0555 | 12.05 | 66600 | 0.1674 | 1.0083 |
| 0.0555 | 12.07 | 66700 | 0.1658 | 1.0069 |
| 0.0555 | 12.08 | 66800 | 0.1661 | 1.0134 |
| 0.0555 | 12.1 | 66900 | 0.1682 | 1.0274 |
| 0.0508 | 12.12 | 67000 | 0.1702 | 1.0219 |
| 0.0508 | 12.14 | 67100 | 0.1694 | 1.0219 |
| 0.0508 | 12.16 | 67200 | 0.1667 | 1.0236 |
| 0.0508 | 12.17 | 67300 | 0.1672 | 1.0253 |
| 0.0508 | 12.19 | 67400 | 0.1640 | 1.0215 |
| 0.0513 | 12.21 | 67500 | 0.1649 | 1.0242 |
| 0.0513 | 12.23 | 67600 | 0.1687 | 1.0262 |
| 0.0513 | 12.25 | 67700 | 0.1655 | 1.0231 |
| 0.0513 | 12.26 | 67800 | 0.1692 | 1.0176 |
| 0.0513 | 12.28 | 67900 | 0.1675 | 1.0202 |
| 0.0519 | 12.3 | 68000 | 0.1644 | 1.0241 |
| 0.0519 | 12.32 | 68100 | 0.1651 | 1.0297 |
| 0.0519 | 12.34 | 68200 | 0.1661 | 1.0287 |
| 0.0519 | 12.36 | 68300 | 0.1665 | 1.0257 |
| 0.0519 | 12.37 | 68400 | 0.1685 | 1.0233 |
| 0.0522 | 12.39 | 68500 | 0.1636 | 1.0177 |
| 0.0522 | 12.41 | 68600 | 0.1709 | 1.0200 |
| 0.0522 | 12.43 | 68700 | 0.1684 | 1.0164 |
| 0.0522 | 12.45 | 68800 | 0.1666 | 1.0119 |
| 0.0522 | 12.46 | 68900 | 0.1683 | 1.0136 |
| 0.05 | 12.48 | 69000 | 0.1696 | 1.0127 |
| 0.05 | 12.5 | 69100 | 0.1708 | 1.0184 |
| 0.05 | 12.52 | 69200 | 0.1654 | 1.0282 |
| 0.05 | 12.54 | 69300 | 0.1700 | 1.0235 |
| 0.05 | 12.55 | 69400 | 0.1688 | 1.0257 |
| 0.0513 | 12.57 | 69500 | 0.1646 | 1.0274 |
| 0.0513 | 12.59 | 69600 | 0.1660 | 1.0247 |
| 0.0513 | 12.61 | 69700 | 0.1657 | 1.0188 |
| 0.0513 | 12.63 | 69800 | 0.1654 | 1.0087 |
| 0.0513 | 12.64 | 69900 | 0.1681 | 1.0146 |
| 0.0512 | 12.66 | 70000 | 0.1660 | 1.0185 |
| 0.0512 | 12.68 | 70100 | 0.1690 | 1.0214 |
| 0.0512 | 12.7 | 70200 | 0.1683 | 1.0160 |
| 0.0512 | 12.72 | 70300 | 0.1695 | 1.0198 |
| 0.0512 | 12.74 | 70400 | 0.1666 | 1.0193 |
| 0.0484 | 12.75 | 70500 | 0.1654 | 1.0142 |
| 0.0484 | 12.77 | 70600 | 0.1598 | 1.0154 |
| 0.0484 | 12.79 | 70700 | 0.1623 | 1.0139 |
| 0.0484 | 12.81 | 70800 | 0.1662 | 1.0180 |
| 0.0484 | 12.83 | 70900 | 0.1659 | 1.0232 |
| 0.0501 | 12.84 | 71000 | 0.1662 | 1.0202 |
| 0.0501 | 12.86 | 71100 | 0.1639 | 1.0161 |
| 0.0501 | 12.88 | 71200 | 0.1666 | 1.0151 |
| 0.0501 | 12.9 | 71300 | 0.1644 | 1.0129 |
| 0.0501 | 12.92 | 71400 | 0.1642 | 1.0171 |
| 0.0482 | 12.93 | 71500 | 0.1635 | 1.0162 |
| 0.0482 | 12.95 | 71600 | 0.1637 | 1.0186 |
| 0.0482 | 12.97 | 71700 | 0.1639 | 1.0142 |
| 0.0482 | 12.99 | 71800 | 0.1643 | 1.0122 |
| 0.0482 | 13.01 | 71900 | 0.1679 | 1.0156 |
| 0.0483 | 13.02 | 72000 | 0.1717 | 1.0224 |
| 0.0483 | 13.04 | 72100 | 0.1742 | 1.0229 |
| 0.0483 | 13.06 | 72200 | 0.1718 | 1.0237 |
| 0.0483 | 13.08 | 72300 | 0.1742 | 1.0266 |
| 0.0483 | 13.1 | 72400 | 0.1736 | 1.0257 |
| 0.0443 | 13.12 | 72500 | 0.1741 | 1.0275 |
| 0.0443 | 13.13 | 72600 | 0.1745 | 1.0325 |
| 0.0443 | 13.15 | 72700 | 0.1737 | 1.0296 |
| 0.0443 | 13.17 | 72800 | 0.1722 | 1.0303 |
| 0.0443 | 13.19 | 72900 | 0.1702 | 1.0305 |
| 0.0424 | 13.21 | 73000 | 0.1733 | 1.0241 |
| 0.0424 | 13.22 | 73100 | 0.1748 | 1.0243 |
| 0.0424 | 13.24 | 73200 | 0.1760 | 1.0231 |
| 0.0424 | 13.26 | 73300 | 0.1745 | 1.0241 |
| 0.0424 | 13.28 | 73400 | 0.1772 | 1.0217 |
| 0.0424 | 13.3 | 73500 | 0.1755 | 1.0206 |
| 0.0424 | 13.31 | 73600 | 0.1743 | 1.0242 |
| 0.0424 | 13.33 | 73700 | 0.1738 | 1.0208 |
| 0.0424 | 13.35 | 73800 | 0.1736 | 1.0249 |
| 0.0424 | 13.37 | 73900 | 0.1747 | 1.0271 |
| 0.0437 | 13.39 | 74000 | 0.1707 | 1.0241 |
| 0.0437 | 13.4 | 74100 | 0.1731 | 1.0269 |
| 0.0437 | 13.42 | 74200 | 0.1743 | 1.0290 |
| 0.0437 | 13.44 | 74300 | 0.1739 | 1.0266 |
| 0.0437 | 13.46 | 74400 | 0.1763 | 1.0246 |
| 0.0443 | 13.48 | 74500 | 0.1724 | 1.0209 |
| 0.0443 | 13.49 | 74600 | 0.1744 | 1.0244 |
| 0.0443 | 13.51 | 74700 | 0.1717 | 1.0232 |
| 0.0443 | 13.53 | 74800 | 0.1754 | 1.0217 |
| 0.0443 | 13.55 | 74900 | 0.1721 | 1.0234 |
| 0.0435 | 13.57 | 75000 | 0.1751 | 1.0197 |
| 0.0435 | 13.59 | 75100 | 0.1727 | 1.0285 |
| 0.0435 | 13.6 | 75200 | 0.1715 | 1.0221 |
| 0.0435 | 13.62 | 75300 | 0.1746 | 1.0247 |
| 0.0435 | 13.64 | 75400 | 0.1712 | 1.0231 |
| 0.0436 | 13.66 | 75500 | 0.1719 | 1.0228 |
| 0.0436 | 13.68 | 75600 | 0.1727 | 1.0197 |
| 0.0436 | 13.69 | 75700 | 0.1750 | 1.0252 |
| 0.0436 | 13.71 | 75800 | 0.1702 | 1.0241 |
| 0.0436 | 13.73 | 75900 | 0.1720 | 1.0250 |
| 0.0433 | 13.75 | 76000 | 0.1744 | 1.0210 |
| 0.0433 | 13.77 | 76100 | 0.1735 | 1.0211 |
| 0.0433 | 13.78 | 76200 | 0.1727 | 1.0205 |
| 0.0433 | 13.8 | 76300 | 0.1706 | 1.0218 |
| 0.0433 | 13.82 | 76400 | 0.1709 | 1.0238 |
| 0.0431 | 13.84 | 76500 | 0.1705 | 1.0197 |
| 0.0431 | 13.86 | 76600 | 0.1734 | 1.0223 |
| 0.0431 | 13.87 | 76700 | 0.1695 | 1.0250 |
| 0.0431 | 13.89 | 76800 | 0.1734 | 1.0232 |
| 0.0431 | 13.91 | 76900 | 0.1724 | 1.0219 |
| 0.041 | 13.93 | 77000 | 0.1706 | 1.0236 |
| 0.041 | 13.95 | 77100 | 0.1689 | 1.0220 |
| 0.041 | 13.97 | 77200 | 0.1738 | 1.0230 |
| 0.041 | 13.98 | 77300 | 0.1727 | 1.0254 |
| 0.041 | 14.0 | 77400 | 0.1721 | 1.0261 |
| 0.041 | 14.02 | 77500 | 0.1760 | 1.0261 |
| 0.041 | 14.04 | 77600 | 0.1772 | 1.0202 |
| 0.041 | 14.06 | 77700 | 0.1782 | 1.0202 |
| 0.041 | 14.07 | 77800 | 0.1777 | 1.0222 |
| 0.041 | 14.09 | 77900 | 0.1787 | 1.0203 |
| 0.0383 | 14.11 | 78000 | 0.1790 | 1.0236 |
| 0.0383 | 14.13 | 78100 | 0.1812 | 1.0245 |
| 0.0383 | 14.15 | 78200 | 0.1778 | 1.0224 |
| 0.0383 | 14.16 | 78300 | 0.1771 | 1.0231 |
| 0.0383 | 14.18 | 78400 | 0.1782 | 1.0242 |
| 0.0391 | 14.2 | 78500 | 0.1785 | 1.0262 |
| 0.0391 | 14.22 | 78600 | 0.1791 | 1.0261 |
| 0.0391 | 14.24 | 78700 | 0.1770 | 1.0254 |
| 0.0391 | 14.25 | 78800 | 0.1810 | 1.0257 |
| 0.0391 | 14.27 | 78900 | 0.1794 | 1.0241 |
| 0.0387 | 14.29 | 79000 | 0.1774 | 1.0256 |
| 0.0387 | 14.31 | 79100 | 0.1774 | 1.0236 |
| 0.0387 | 14.33 | 79200 | 0.1759 | 1.0222 |
| 0.0387 | 14.35 | 79300 | 0.1787 | 1.0237 |
| 0.0387 | 14.36 | 79400 | 0.1788 | 1.0227 |
| 0.0372 | 14.38 | 79500 | 0.1789 | 1.0232 |
| 0.0372 | 14.4 | 79600 | 0.1771 | 1.0254 |
| 0.0372 | 14.42 | 79700 | 0.1777 | 1.0244 |
| 0.0372 | 14.44 | 79800 | 0.1791 | 1.0225 |
| 0.0372 | 14.45 | 79900 | 0.1786 | 1.0237 |
| 0.0385 | 14.47 | 80000 | 0.1782 | 1.0243 |
| 0.0385 | 14.49 | 80100 | 0.1770 | 1.0236 |
| 0.0385 | 14.51 | 80200 | 0.1782 | 1.0240 |
| 0.0385 | 14.53 | 80300 | 0.1764 | 1.0243 |
| 0.0385 | 14.54 | 80400 | 0.1748 | 1.0248 |
| 0.039 | 14.56 | 80500 | 0.1758 | 1.0232 |
| 0.039 | 14.58 | 80600 | 0.1763 | 1.0246 |
| 0.039 | 14.6 | 80700 | 0.1770 | 1.0220 |
| 0.039 | 14.62 | 80800 | 0.1788 | 1.0225 |
| 0.039 | 14.63 | 80900 | 0.1781 | 1.0230 |
| 0.039 | 14.65 | 81000 | 0.1779 | 1.0230 |
| 0.039 | 14.67 | 81100 | 0.1755 | 1.0212 |
| 0.039 | 14.69 | 81200 | 0.1765 | 1.0226 |
| 0.039 | 14.71 | 81300 | 0.1787 | 1.0241 |
| 0.039 | 14.72 | 81400 | 0.1782 | 1.0250 |
| 0.0368 | 14.74 | 81500 | 0.1780 | 1.0248 |
| 0.0368 | 14.76 | 81600 | 0.1782 | 1.0242 |
| 0.0368 | 14.78 | 81700 | 0.1782 | 1.0242 |
| 0.0368 | 14.8 | 81800 | 0.1792 | 1.0241 |
| 0.0368 | 14.82 | 81900 | 0.1796 | 1.0238 |
| 0.0378 | 14.83 | 82000 | 0.1795 | 1.0236 |
| 0.0378 | 14.85 | 82100 | 0.1796 | 1.0239 |
| 0.0378 | 14.87 | 82200 | 0.1792 | 1.0236 |
| 0.0378 | 14.89 | 82300 | 0.1789 | 1.0239 |
| 0.0378 | 14.91 | 82400 | 0.1788 | 1.0238 |
| 0.0386 | 14.92 | 82500 | 0.1787 | 1.0239 |
| 0.0386 | 14.94 | 82600 | 0.1786 | 1.0236 |
| 0.0386 | 14.96 | 82700 | 0.1786 | 1.0237 |
| 0.0386 | 14.98 | 82800 | 0.1787 | 1.0239 |
| 0.0386 | 15.0 | 82900 | 0.1788 | 1.0238 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
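### Example usage
A minimal transcription sketch, assuming the repository ships the standard Wav2Vec2 processor files and that the input audio is a 16 kHz mono recording (the file name below is only a placeholder):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("gabrieljg/wav2vec2-common_voice-es-demo")
model = Wav2Vec2ForCTC.from_pretrained("gabrieljg/wav2vec2-common_voice-es-demo")

# Load and resample the audio to the 16 kHz rate expected by wav2vec 2.0.
speech, _ = librosa.load("ejemplo_es.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```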
|
{"language": ["es"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-common_voice-es-demo", "results": []}]}
|
gabrieljg/wav2vec2-common_voice-es-demo
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"es",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
gabrieljg/wav2vec2-large-xls-r-300m-spanish-colab
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 10% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"]}
|
gabtan99/dialogpt-tagalog-medium-10
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 20% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"], "inference": false}
|
gabtan99/dialogpt-tagalog-medium-20
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 30% of tokens in each utterance in the synthetic conversation are machine-generated tokens.
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"], "inference": false}
|
gabtan99/dialogpt-tagalog-medium-30
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Tagalog DialoGPT
A DialoGPT-medium model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of research on RoBERTa-based data augmentation for low-resource languages. This is the baseline model, which did not use any synthetic data in training.
# Latest release: July 25, 2021
* The model can currently only respond based on a history of up to 3 previous utterances; older context is cut off. This is a result of the scarce amount of Tagalog conversation data in our dataset.
# Dataset
[PEx Conversations Dataset](https://huggingface.co/datasets/gabtan99/pex-conversations)
# Usage
Here is an example of using beam search for model inference.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gabtan99/dialogpt-tagalog-medium")
model = AutoModelForCausalLM.from_pretrained("gabtan99/dialogpt-tagalog-medium")

for step in range(2):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # we limit the generation to 512 tokens; each utterance in training had a maximum of 128 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=512,
        pad_token_id=tokenizer.eos_token_id,
        num_beams=5,
        no_repeat_ngram_size=3
    )
    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
# Training Script
[Fine-tuning script adapted from Spanish DialoGPT](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb)
# Research by
* [tyadrianpaule](https://huggingface.co/tyadrianpaule)
* [schuylerng](https://huggingface.co/schuylerng)
* [dcl127](https://huggingface.co/dcl127)
|
{"language": ["tl"], "tags": ["conversational", "tagalog", "filipino"], "datasets": ["gabtan99/pex-conversations"], "inference": false}
|
gabtan99/dialogpt-tagalog-medium
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"tagalog",
"filipino",
"tl",
"dataset:gabtan99/pex-conversations",
"autotrain_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
I am adding my first README in order to test the interface. How good is it really?
|
{}
|
gael1130/gael_first_model
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
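A minimal loading sketch with the standard seq2seq API is shown below; the exact input serialization used for relation linking is defined in the GitHub repository above, so the question string here is only a placeholder assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gaetangate/bart-large_genrl_lcquad1")
model = AutoModelForSeq2SeqLM.from_pretrained("gaetangate/bart-large_genrl_lcquad1")

# Placeholder input; see the GitHub repo for the exact question/entity serialization.
inputs = tokenizer("Who is the mayor of Paris?", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```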
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
```
|
{"license": "apache-2.0"}
|
gaetangate/bart-large_genrl_lcquad1
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:2108.07337",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
```
|
{"license": "apache-2.0"}
|
gaetangate/bart-large_genrl_lcquad2
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:2108.07337",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
```
|
{"license": "apache-2.0"}
|
gaetangate/bart-large_genrl_qald9
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:2108.07337",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
```
|
{"license": "apache-2.0"}
|
gaetangate/bart-large_genrl_simpleq
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:2108.07337",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
test 123
|
{}
|
gaga42gaga42/test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Generating Right Wing News Using GPT2
### I have built a custom model for it using data from Kaggle
This is a new fine-tuned model created using news data from Fox News.
### My model can be accessed at gagan3012/Fox-News-Generator
Check the [BenchmarkTest](https://github.com/gagan3012/Fox-News-Generator/blob/master/BenchmarkTest.ipynb) notebook for results
Find the model at [gagan3012/Fox-News-Generator](https://huggingface.co/gagan3012/Fox-News-Generator)
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/Fox-News-Generator")
model = AutoModelWithLMHead.from_pretrained("gagan3012/Fox-News-Generator")
```
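A short generation sketch that continues from the loading snippet above (the prompt and decoding settings are illustrative assumptions, not the settings used in the benchmark notebook):
```python
# Continues from the loading snippet above.
prompt = "Breaking news:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,        # sample instead of greedy decoding for more varied text
    top_p=0.95,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```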
|
{}
|
gagan3012/Fox-News-Generator
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2I2A
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the vizwiz dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0708
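For reference, a minimal captioning sketch, assuming the repository contains the usual VisionEncoderDecoder weights together with a ViT feature extractor and a GPT-2 tokenizer (the image path is a placeholder):
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ViTGPT2I2A")
feature_extractor = ViTFeatureExtractor.from_pretrained("gagan3012/ViTGPT2I2A")
tokenizer = AutoTokenizer.from_pretrained("gagan3012/ViTGPT2I2A")

image = Image.open("example.jpg").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values, max_length=64, num_beams=4)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```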
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1528 | 0.17 | 1000 | 0.0869 |
| 0.0899 | 0.34 | 2000 | 0.0817 |
| 0.084 | 0.51 | 3000 | 0.0790 |
| 0.0814 | 0.68 | 4000 | 0.0773 |
| 0.0803 | 0.85 | 5000 | 0.0757 |
| 0.077 | 1.02 | 6000 | 0.0745 |
| 0.0739 | 1.19 | 7000 | 0.0740 |
| 0.0719 | 1.37 | 8000 | 0.0737 |
| 0.0717 | 1.54 | 9000 | 0.0730 |
| 0.0731 | 1.71 | 10000 | 0.0727 |
| 0.0708 | 1.88 | 11000 | 0.0720 |
| 0.0697 | 2.05 | 12000 | 0.0717 |
| 0.0655 | 2.22 | 13000 | 0.0719 |
| 0.0653 | 2.39 | 14000 | 0.0719 |
| 0.0657 | 2.56 | 15000 | 0.0712 |
| 0.0663 | 2.73 | 16000 | 0.0710 |
| 0.0654 | 2.9 | 17000 | 0.0708 |
| 0.0645 | 3.07 | 18000 | 0.0716 |
| 0.0616 | 3.24 | 19000 | 0.0712 |
| 0.0607 | 3.41 | 20000 | 0.0712 |
| 0.0611 | 3.58 | 21000 | 0.0711 |
| 0.0615 | 3.76 | 22000 | 0.0711 |
| 0.0614 | 3.93 | 23000 | 0.0710 |
| 0.0594 | 4.1 | 24000 | 0.0716 |
| 0.0587 | 4.27 | 25000 | 0.0715 |
| 0.0574 | 4.44 | 26000 | 0.0715 |
| 0.0579 | 4.61 | 27000 | 0.0715 |
| 0.0581 | 4.78 | 28000 | 0.0715 |
| 0.0579 | 4.95 | 29000 | 0.0715 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["image-captioning", "generated_from_trainer"], "model-index": [{"name": "ViTGPT2I2A", "results": []}]}
|
gagan3012/ViTGPT2I2A
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-captioning",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2_VW
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1256 | 0.03 | 1000 | 0.0928 |
| 0.0947 | 0.07 | 2000 | 0.0897 |
| 0.0889 | 0.1 | 3000 | 0.0859 |
| 0.0888 | 0.14 | 4000 | 0.0842 |
| 0.0866 | 0.17 | 5000 | 0.0831 |
| 0.0852 | 0.2 | 6000 | 0.0819 |
| 0.0833 | 0.24 | 7000 | 0.0810 |
| 0.0835 | 0.27 | 8000 | 0.0802 |
| 0.081 | 0.31 | 9000 | 0.0796 |
| 0.0803 | 0.34 | 10000 | 0.0789 |
| 0.0814 | 0.38 | 11000 | 0.0785 |
| 0.0799 | 0.41 | 12000 | 0.0780 |
| 0.0786 | 0.44 | 13000 | 0.0776 |
| 0.0796 | 0.48 | 14000 | 0.0771 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "ViTGPT2_VW", "results": []}]}
|
gagan3012/ViTGPT2_VW
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
image-to-text
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2_vizwiz
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0719
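A quick way to try the checkpoint, assuming a recent Transformers release that provides the `image-to-text` pipeline and that the repository bundles the processor and tokenizer files (the file name is a placeholder):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="gagan3012/ViTGPT2_vizwiz")
# Any local path or URL to an image works here.
print(captioner("street_scene.jpg"))
```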
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1207 | 0.07 | 1000 | 0.0906 |
| 0.0916 | 0.14 | 2000 | 0.0861 |
| 0.0879 | 0.2 | 3000 | 0.0840 |
| 0.0856 | 0.27 | 4000 | 0.0822 |
| 0.0834 | 0.34 | 5000 | 0.0806 |
| 0.0817 | 0.41 | 6000 | 0.0795 |
| 0.0812 | 0.48 | 7000 | 0.0785 |
| 0.0808 | 0.55 | 8000 | 0.0779 |
| 0.0796 | 0.61 | 9000 | 0.0771 |
| 0.0786 | 0.68 | 10000 | 0.0767 |
| 0.0774 | 0.75 | 11000 | 0.0762 |
| 0.0772 | 0.82 | 12000 | 0.0758 |
| 0.0756 | 0.89 | 13000 | 0.0754 |
| 0.0759 | 0.96 | 14000 | 0.0750 |
| 0.0756 | 1.02 | 15000 | 0.0748 |
| 0.0726 | 1.09 | 16000 | 0.0745 |
| 0.0727 | 1.16 | 17000 | 0.0745 |
| 0.0715 | 1.23 | 18000 | 0.0742 |
| 0.0726 | 1.3 | 19000 | 0.0741 |
| 0.072 | 1.37 | 20000 | 0.0738 |
| 0.0723 | 1.43 | 21000 | 0.0735 |
| 0.0715 | 1.5 | 22000 | 0.0734 |
| 0.0724 | 1.57 | 23000 | 0.0732 |
| 0.0723 | 1.64 | 24000 | 0.0730 |
| 0.0718 | 1.71 | 25000 | 0.0729 |
| 0.07 | 1.78 | 26000 | 0.0728 |
| 0.0702 | 1.84 | 27000 | 0.0726 |
| 0.0704 | 1.91 | 28000 | 0.0725 |
| 0.0703 | 1.98 | 29000 | 0.0725 |
| 0.0686 | 2.05 | 30000 | 0.0726 |
| 0.0687 | 2.12 | 31000 | 0.0726 |
| 0.0688 | 2.19 | 32000 | 0.0724 |
| 0.0677 | 2.25 | 33000 | 0.0724 |
| 0.0665 | 2.32 | 34000 | 0.0725 |
| 0.0684 | 2.39 | 35000 | 0.0723 |
| 0.0678 | 2.46 | 36000 | 0.0722 |
| 0.0686 | 2.53 | 37000 | 0.0722 |
| 0.067 | 2.59 | 38000 | 0.0721 |
| 0.0669 | 2.66 | 39000 | 0.0721 |
| 0.0673 | 2.73 | 40000 | 0.0721 |
| 0.0673 | 2.8 | 41000 | 0.0720 |
| 0.0662 | 2.87 | 42000 | 0.0720 |
| 0.0681 | 2.94 | 43000 | 0.0719 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer", "image-to-text"], "model-index": [{"name": "ViTGPT2_vizwiz", "results": []}]}
|
gagan3012/ViTGPT2_vizwiz
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"generated_from_trainer",
"image-to-text",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-ner
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Precision: 0.8083
- Recall: 0.8274
- F1: 0.8177
- Accuracy: 0.9598
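For reference, a minimal inference sketch using the token-classification pipeline (the example sentence and the aggregation strategy are illustrative assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gagan3012/bert-tiny-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```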
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0355 | 1.0 | 878 | 0.1692 | 0.8072 | 0.8248 | 0.8159 | 0.9594 |
| 0.0411 | 2.0 | 1756 | 0.1678 | 0.8101 | 0.8277 | 0.8188 | 0.9600 |
| 0.0386 | 3.0 | 2634 | 0.1697 | 0.8103 | 0.8269 | 0.8186 | 0.9599 |
| 0.0373 | 4.0 | 3512 | 0.1694 | 0.8106 | 0.8263 | 0.8183 | 0.9600 |
| 0.0383 | 5.0 | 4390 | 0.1689 | 0.8083 | 0.8274 | 0.8177 | 0.9598 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-tiny-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.8083060109289617, "name": "Precision"}, {"type": "recall", "value": 0.8273856136033113, "name": "Recall"}, {"type": "f1", "value": 0.8177345348001547, "name": "F1"}, {"type": "accuracy", "value": 0.9597597979252387, "name": "Accuracy"}]}]}]}
|
gagan3012/bert-tiny-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
gagan3012/debug_notebook
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9274
- Recall: 0.9363
- F1: 0.9319
- Accuracy: 0.9840
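For reference, a manual inference sketch without the pipeline helper, assuming the label mapping is stored in the model config (the example sentence is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("gagan3012/distilbert-base-uncased-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("gagan3012/distilbert-base-uncased-finetuned-ner")

text = "George Washington lived in Virginia."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])
```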
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2403 | 1.0 | 878 | 0.0701 | 0.9101 | 0.9202 | 0.9151 | 0.9805 |
| 0.0508 | 2.0 | 1756 | 0.0600 | 0.9220 | 0.9350 | 0.9285 | 0.9833 |
| 0.0301 | 3.0 | 2634 | 0.0614 | 0.9274 | 0.9363 | 0.9319 | 0.9840 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9274238227146815, "name": "Precision"}, {"type": "recall", "value": 0.9363463474661595, "name": "Recall"}, {"type": "f1", "value": 0.9318637274549098, "name": "F1"}, {"type": "accuracy", "value": 0.9839865283492462, "name": "Accuracy"}]}]}]}
|
gagan3012/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
gagan3012/distilbert-base-uncased-finetuned-sst2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
gagan3012/distilbert-fakenews-model-grover
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
# keytotext

The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
{"language": "en", "license": "mit", "tags": ["keytotext", "k2t-base", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
gagan3012/k2t-base
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t-base",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# keytotext

The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
{"language": "en", "license": "mit", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["common_gen"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
gagan3012/k2t-new
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Sentences",
"en",
"dataset:common_gen",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<h1 align="center">keytotext</h1>
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
[](https://github.com/gagan3012/keytotext#api)
[](https://hub.docker.com/r/gagan30/keytotext)
[](https://huggingface.co/models?filter=keytotext)
[](https://keytotext.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/psf/black)

The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation etc.
- Fine tuning of topic modeling models
|
{"language": "en", "license": "MIT", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
gagan3012/k2t-test
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# keytotext
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
[](https://github.com/gagan3012/keytotext#api)
[](https://hub.docker.com/r/gagan30/keytotext)
[](https://huggingface.co/models?filter=keytotext)
[](https://keytotext.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/psf/black)

The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation etc.
- Fine tuning of topic modeling models
|
{"language": "en", "license": "MIT", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
gagan3012/k2t-test3
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# keytotext

The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
{"language": "en", "license": "mit", "tags": ["keytotext", "k2t-tiny", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
gagan3012/k2t-tiny
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t-tiny",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# keytotext

The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
{"language": "en", "license": "mit", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "datasets": ["WebNLG", "Dart"], "metrics": ["NLG"], "thumbnail": "Keywords to Sentences"}
|
gagan3012/k2t
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
{}
|
gagan3012/keytotext-gpt
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
# keytotext
Idea is to build a model which will take keywords as inputs and generate sentences as outputs.
### Model:
Two Models have been built:
- Using T5-base size = 850 MB can be found here: https://huggingface.co/gagan3012/keytotext
- Using T5-small size = 230 MB can be found here: https://huggingface.co/gagan3012/keytotext-small
#### Usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")
```
### Demo:
[](https://share.streamlit.io/gagan3012/keytotext/app.py)
https://share.streamlit.io/gagan3012/keytotext/app.py

### Example:
['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
|
{}
|
gagan3012/keytotext-small
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# keytotext
The idea is to build a model that takes keywords as inputs and generates sentences as outputs.
### Model:
Two models have been built:
- Using T5-base (size: 850 MB), available here: https://huggingface.co/gagan3012/keytotext
- Using T5-small (size: 230 MB), available here: https://huggingface.co/gagan3012/keytotext-small
#### Usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")
```
### Demo:
[](https://share.streamlit.io/gagan3012/keytotext/app.py)
https://share.streamlit.io/gagan3012/keytotext/app.py

### Example:
['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
|
{}
|
gagan3012/keytotext
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "model", "results": []}]}
|
gagan3012/model
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pickuplines
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "pickuplines", "results": []}]}
|
gagan3012/pickuplines
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
{}
|
gagan3012/project-code-py-micro
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
gagan3012/project-code-py-neo
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Leetcode using AI :robot:
GPT-2 Model for Leetcode Questions in python
**Note**: the answers might not make sense in some cases because of the bias in GPT-2.
**Contributions:** If you would like to make the model better, contributions are welcome. Check out [CONTRIBUTIONS.md](https://github.com/gagan3012/project-code-py/blob/master/CONTRIBUTIONS.md)
### 📢 Favour:
It would be highly motivating if you could STAR⭐ this repo, should you find it helpful.
## Model
Two models have been developed for different use cases and they can be found at https://huggingface.co/gagan3012
The model weights can be found here: [GPT-2](https://huggingface.co/gagan3012/project-code-py) and [DistilGPT-2](https://huggingface.co/gagan3012/project-code-py-small)
### Example usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/project-code-py")
model = AutoModelWithLMHead.from_pretrained("gagan3012/project-code-py")
```
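To prompt the model with a question like the one in the example below, a rough generation sketch (the decoding parameters are illustrative assumptions):
```python
# Continues from the loading snippet above.
question = "Write a function to delete a node in a singly-linked list."
input_ids = tokenizer.encode(question, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=256,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```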
## Demo
[](https://share.streamlit.io/gagan3012/project-code-py/app.py)
A Streamlit web app has been set up to use the model: https://share.streamlit.io/gagan3012/project-code-py/app.py

## Example results:
### Question:
```
Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
```
### Answer:
```python
""" Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
For example,
a = 1->2->3
b = 3->1->2
t = ListNode(-1, 1)
Note: The lexicographic ordering of the nodes in a tree matters. Do not assign values to nodes in a tree.
Example 1:
Input: [1,2,3]
Output: 1->2->5
Explanation: 1->2->3->3->4, then 1->2->5[2] and then 5->1->3->4.
Note:
The length of a linked list will be in the range [1, 1000].
Node.val must be a valid LinkedListNode type.
Both the length and the value of the nodes in a linked list will be in the range [-1000, 1000].
All nodes are distinct.
"""
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution:
def deleteNode(self, head: ListNode, val: int) -> None:
"""
BFS
Linked List
:param head: ListNode
:param val: int
:return: ListNode
"""
if head is not None:
return head
dummy = ListNode(-1, 1)
dummy.next = head
dummy.next.val = val
dummy.next.next = head
dummy.val = ""
s1 = Solution()
print(s1.deleteNode(head))
print(s1.deleteNode(-1))
print(s1.deleteNode(-1))
```
|
{}
|
gagan3012/project-code-py-small
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Leetcode using AI :robot:
GPT-2 Model for Leetcode Questions in python
**Note**: the answers might not make sense in some cases because of the bias in GPT-2.
**Contributions:** If you would like to make the model better, contributions are welcome. Check out [CONTRIBUTIONS.md](https://github.com/gagan3012/project-code-py/blob/master/CONTRIBUTIONS.md)
### 📢 Favour:
It would be highly motivating if you could STAR⭐ this repo, should you find it helpful.
## Model
Two models have been developed for different use cases and they can be found at https://huggingface.co/gagan3012
The model weights can be found here: [GPT-2](https://huggingface.co/gagan3012/project-code-py) and [DistilGPT-2](https://huggingface.co/gagan3012/project-code-py-small)
### Example usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/project-code-py")
model = AutoModelWithLMHead.from_pretrained("gagan3012/project-code-py")
```
## Demo
[](https://share.streamlit.io/gagan3012/project-code-py/app.py)
A Streamlit web app has been set up to use the model: https://share.streamlit.io/gagan3012/project-code-py/app.py

## Example results:
### Question:
```
Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
```
### Answer:
```python
""" Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
For example,
a = 1->2->3
b = 3->1->2
t = ListNode(-1, 1)
Note: The lexicographic ordering of the nodes in a tree matters. Do not assign values to nodes in a tree.
Example 1:
Input: [1,2,3]
Output: 1->2->5
Explanation: 1->2->3->3->4, then 1->2->5[2] and then 5->1->3->4.
Note:
The length of a linked list will be in the range [1, 1000].
Node.val must be a valid LinkedListNode type.
Both the length and the value of the nodes in a linked list will be in the range [-1000, 1000].
All nodes are distinct.
"""
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution:
def deleteNode(self, head: ListNode, val: int) -> None:
"""
BFS
Linked List
:param head: ListNode
:param val: int
:return: ListNode
"""
if head is not None:
return head
dummy = ListNode(-1, 1)
dummy.next = head
dummy.next.val = val
dummy.next.next = head
dummy.val = ""
s1 = Solution()
print(s1.deleteNode(head))
print(s1.deleteNode(-1))
print(s1.deleteNode(-1))
```
|
{}
|
gagan3012/project-code-py
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# Generating Rap song Lyrics like Eminem Using GPT2
### I have built a custom model for it using data from Kaggle
This is a new fine-tuned model created using lyrics from leading hip-hop artists.
### My model can be accessed at: gagan3012/rap-writer
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/rap-writer")
model = AutoModelWithLMHead.from_pretrained("gagan3012/rap-writer")
```
|
{}
|
gagan3012/rap-writer
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
---
Summarisation model.
|
{}
|
gagan3012/summarsiation
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|