Columns: modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars)
Despin89/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - BipedalWalkerHardcore-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - metrics: - type: mean_reward value: -132.89 +/- 24.41 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: BipedalWalkerHardcore-v3 type: BipedalWalkerHardcore-v3 --- # **DDPG** Agent playing **BipedalWalkerHardcore-v3** This is a trained model of a **DDPG** agent playing **BipedalWalkerHardcore-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ddpg --env BipedalWalkerHardcore-v3 -orga jackoyoungblood -f logs/ python enjoy.py --algo ddpg --env BipedalWalkerHardcore-v3 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ddpg --env BipedalWalkerHardcore-v3 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ddpg --env BipedalWalkerHardcore-v3 -f logs/ -orga jackoyoungblood ``` ## Hyperparameters ```python OrderedDict([('buffer_size', 200000), ('gamma', 0.98), ('gradient_steps', -1), ('learning_rate', 0.001), ('learning_starts', 10000), ('n_timesteps', 100000.0), ('noise_std', 0.1), ('noise_type', 'normal'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=[400, 300])'), ('train_freq', [1, 'episode']), ('normalize', False)]) ```
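The commands above go through the RL Zoo scripts; for loading the checkpoint directly in Python, a minimal sketch using the `huggingface_sb3` helper is shown below. The repo id `jackoyoungblood/ddpg-BipedalWalkerHardcore-v3` and the zip filename are assumptions inferred from the card, not confirmed by it.
```python
# Minimal sketch: load the DDPG checkpoint with stable-baselines3 directly,
# bypassing the RL Zoo scripts. Repo id and filename are assumed, not confirmed.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DDPG

checkpoint = load_from_hub(
    repo_id="jackoyoungblood/ddpg-BipedalWalkerHardcore-v3",  # assumed repo id
    filename="ddpg-BipedalWalkerHardcore-v3.zip",             # assumed filename
)
model = DDPG.load(checkpoint)

env = gym.make("BipedalWalkerHardcore-v3")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```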
Dev-DGT/food-dbert-multiling
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- library_name: stable-baselines3 tags: - BipedalWalkerHardcore-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - metrics: - type: mean_reward value: -132.89 +/- 24.41 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: BipedalWalkerHardcore-v3 type: BipedalWalkerHardcore-v3 --- # **DDPG** Agent playing **BipedalWalkerHardcore-v3** This is a trained model of a **DDPG** agent playing **BipedalWalkerHardcore-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ddpg --env BipedalWalkerHardcore-v3 -orga jackoyoungblood -f logs/ python enjoy.py --algo ddpg --env BipedalWalkerHardcore-v3 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ddpg --env BipedalWalkerHardcore-v3 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ddpg --env BipedalWalkerHardcore-v3 -f logs/ -orga jackoyoungblood ``` ## Hyperparameters ```python OrderedDict([('buffer_size', 200000), ('gamma', 0.98), ('gradient_steps', -1), ('learning_rate', 0.001), ('learning_starts', 10000), ('n_timesteps', 100000.0), ('noise_std', 0.1), ('noise_type', 'normal'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=[400, 300])'), ('train_freq', [1, 'episode']), ('normalize', False)]) ```
Devid/DialoGPT-small-Miku
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: dvalbuena1/testpyramidsrnd 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Devmapall/paraphrase-quora
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
3
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: -2.84 +/- 3.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: quadrotor_multi type: quadrotor_multi --- An **APPO** model trained on the **quadrotor_multi** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
Devrim/prism-default
[ "license:mit" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="dvalbuena1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
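The usage snippet above calls `load_from_hub` and `evaluate_agent` without defining them; a minimal sketch of both helpers, modeled on the Hugging Face Deep RL course utilities, follows. Filenames and exact signatures are assumptions.
```python
# Hedged sketch of the helpers assumed by the card's usage snippet.
import pickle

import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dict (qtable, env_id, eval settings)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    """Greedy rollouts with the Q-table; returns mean and std of episode reward."""
    episode_rewards = []
    for episode in range(n_eval_episodes):
        # Note: on gym>=0.26, env.reset returns (obs, info); unpack accordingly.
        state = env.reset(seed=eval_seed[episode]) if eval_seed else env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))  # act greedily w.r.t. Q
            state, reward, done, info = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```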
DevsIA/imagenes
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - pythae - reproducibility license: apache-2.0 --- This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from pythae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_vamp") ``` ## Reproducibility This trained model reproduces the results of Table 1 in [1]. | Model | Dataset | Metric | Obtained value | Reference value | |:---:|:---:|:---:|:---:|:---:| | VAMP (K=500) | Binary MNIST | NLL (5000 IS) | 85.79 (0.00) | 85.57 | [1] Jakub Tomczak and Max Welling. VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics, pages 1214–1223. PMLR, 2018.
DewiBrynJones/wav2vec2-large-xlsr-welsh
[ "cy", "dataset:common_voice", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en tags: - pythae - reproducibility library_name: pythae license: apache-2.0 --- This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub` ```python >>> from pythae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_vae") ``` ## Reproducibility This trained model reproduces the results of the VAE used in Table 1 in [1]. | Model | Dataset | Metric | Obtained value | Reference value | |:---:|:---:|:---:|:---:|:---:| | VAE | Binary MNIST | NLL (200 IS) | 89.78 (0.01) | 89.9 | [1] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning, pages 1530–1538. PMLR, 2015
DheerajPranav/Dialo-GPT-Rick-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 658.50 +/- 131.07 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marii -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga marii ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Dibyaranjan/nl_image_search
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model Gabesantos1007/autotrain-texto_sem_enelvo-1283649112 is restricted and you are not in the authorized list. Visit https://huggingface.co/Gabesantos1007/autotrain-texto_sem_enelvo-1283649112 to ask for access.
DiegoAlysson/opus-mt-en-ro-finetuned-en-to-ro
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:wmt16", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: - en thumbnail: "https://github.com/faGH/fa.creative/blob/master/Icons/FrostAura/FA%20Logo/FrostAura.Logo.Complex.png?raw=true" tags: - text-generation - novel-generation - fiction - gpt-neo-x - pytorch license: "mit" --- <p align="center"> <img src="https://github.com/faGH/fa.creative/blob/master/Icons/FrostAura/FA%20Logo/FrostAura.Logo.Complex.png?raw=true" width="75" title="hover text"> </p> # fa.intelligence.models.generative.novels.fiction ## Description This FrostAura Intelligence model is a fine-tuned version of [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) for fictional text content generation. ## Getting Started ### PIP Installation ``` pip install -U --no-cache-dir transformers ``` ### Usage ``` from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast model_name = 'FrostAura/gpt-neox-20b-fiction-novel-generation' model = GPTNeoXForCausalLM.from_pretrained(model_name) tokenizer = GPTNeoXTokenizerFast.from_pretrained(model_name) prompt = 'GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI.' input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(f'Result: {gen_text}') ``` ## Further Fine-Tuning `in development` ## Support If you enjoy FrostAura open-source content and would like to support us in continuous delivery, please consider a donation via a platform of your choice. | Supported Platforms | Link | | ------------------- | ---- | | PayPal | [Donate via Paypal](https://www.paypal.com/donate/?hosted_button_id=SVEXJC9HFBJ72) | For any queries, contact [email protected].
DiegoBalam12/institute_classification
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 526.00 +/- 122.47 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dvalbuena1 -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dvalbuena1 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Digakive/Hsgshs
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - BipedalWalker-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DDPG results: - metrics: - type: mean_reward value: 287.74 +/- 81.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: BipedalWalker-v3 type: BipedalWalker-v3 --- # **DDPG** Agent playing **BipedalWalker-v3** This is a trained model of a **DDPG** agent playing **BipedalWalker-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ddpg --env BipedalWalker-v3 -orga jackoyoungblood -f logs/ python enjoy.py --algo ddpg --env BipedalWalker-v3 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ddpg --env BipedalWalker-v3 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ddpg --env BipedalWalker-v3 -f logs/ -orga jackoyoungblood ``` ## Hyperparameters ```python OrderedDict([('buffer_size', 200000), ('gamma', 0.98), ('gradient_steps', -1), ('learning_rate', 0.0001), ('learning_starts', 10000), ('n_timesteps', 1000000.0), ('noise_std', 0.1), ('noise_type', 'normal'), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(net_arch=[400, 300])'), ('train_freq', [1, 'episode']), ('normalize', False)]) ```
Dilmk2/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- language: - ko tags: - roberta - tokenizer only license: - mit --- ## Library versions - transformers: 4.21.1 - datasets: 2.4.0 - tokenizers: 0.12.1 ## Training code ```python from datasets import load_dataset from tokenizers import ByteLevelBPETokenizer tokenizer = ByteLevelBPETokenizer(unicode_normalizer="nfkc", trim_offsets=True) ds = load_dataset("Bingsu/my-korean-training-corpus", use_auth_token=True) # if you want to use public data: # ds = load_dataset("cc100", lang="ko") # 50GB # This corpus is 35GB; using all of it would overwhelm my machine, so only a subset was used. ds_sample = ds["train"].train_test_split(0.35, seed=20220819)["test"] def gen_text(batch_size: int = 5000): for i in range(0, len(ds_sample), batch_size): yield ds_sample[i : i + batch_size]["text"] tokenizer.train_from_iterator( gen_text(), vocab_size=50265, # same size as roberta-base min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ], ) tokenizer.save("my_tokenizer.json") ``` Training took about 7 hours (i5-12600, non-K). ![image](https://i.imgur.com/LNNbtGH.png) Afterwards, the tokenizer's post-processor is replaced with RobertaProcessing. ```python from tokenizers import Tokenizer from tokenizers.processors import RobertaProcessing tokenizer = Tokenizer.from_file("my_tokenizer.json") tokenizer.post_processor = RobertaProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), add_prefix_space=False, ) tokenizer.save("my_tokenizer2.json") ``` The `add_prefix_space=False` option is there to mirror [roberta-base](https://huggingface.co/roberta-base) exactly. Then `model_max_length` was configured. ```python from transformers import RobertaTokenizerFast rt = RobertaTokenizerFast(tokenizer_file="tokenizer.json") rt.save_pretrained("./my_roberta_tokenizer") ``` Add `"model_max_length": 512,` to the `tokenizer_config.json` file in the saved folder. ## Usage #### 1. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Bingsu/BBPE_tokenizer_test") # the tokenizer will be a RobertaTokenizerFast instance ``` #### 2. Download the `tokenizer.json` file first. ```python from transformers import BartTokenizerFast, BertTokenizerFast bart_tokenizer = BartTokenizerFast(tokenizer_file="tokenizer.json") bert_tokenizer = BertTokenizerFast(tokenizer_file="tokenizer.json") ``` The file can be loaded not only into BART, which like RoBERTa uses BBPE, but also into BERT. Note that when loaded this way, `model_max_len` is not set, so you have to set it yourself.
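For example, one way to patch in the missing limit after loading from a bare `tokenizer.json` (the 512 value follows the card):
```python
from transformers import BertTokenizerFast

# model_max_length is unset when loading from a raw tokenizer.json
# (it falls back to a huge sentinel value), so set it explicitly:
bert_tokenizer = BertTokenizerFast(tokenizer_file="tokenizer.json")
bert_tokenizer.model_max_length = 512
```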
Dimedrolza/DialoGPT-small-cyberpunk
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: jackoyoungblood/testpyramidsrnd 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Dkwkk/W
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 --- ## Anime Segmentation Models Model weights for [SkyTNT/anime-segmentation](https://github.com/SkyTNT/anime-segmentation).
Dmitriiserg/Pxd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8667460317460317 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6283422459893048 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6320474777448071 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7726514730405781 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.92 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.631578947368421 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6365740740740741 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9218020189844809 - name: F1 (macro) type: f1_macro value: 0.9177802775241372 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8694835680751174 - name: F1 (macro) type: f1_macro value: 0.7143050763634161 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6852654387865655 - name: F1 (macro) type: f1_macro value: 0.6719168721915184 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9568755651387633 - name: F1 (macro) type: f1_macro value: 0.8813368476884477 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.911939830774052 - name: F1 (macro) type: f1_macro value: 0.9100147353897433 --- # relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6283422459893048 - Accuracy on SAT: 0.6320474777448071 - Accuracy on BATS: 0.7726514730405781 - Accuracy on U2: 0.631578947368421 - Accuracy on U4: 0.6365740740740741 - Accuracy on Google: 0.92 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9218020189844809 - Micro F1 score on CogALexV: 0.8694835680751174 - Micro F1 score on EVALution: 0.6852654387865655 - Micro F1 score on K&H+N: 0.9568755651387633 - Micro F1 score on ROOT09: 0.911939830774052 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8667460317460317 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
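As an illustration only (not the paper's evaluation protocol), relation embeddings from the model can be compared with cosine similarity:
```python
# Illustrative sketch: compare two word-pair relation embeddings by cosine
# similarity. This is not the evaluation protocol used in the paper.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-c-nce")
vec_a = np.asarray(model.get_embedding(["Tokyo", "Japan"]))
vec_b = np.asarray(model.get_embedding(["Paris", "France"]))
cosine = vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
print(f"capital-of similarity: {cosine:.3f}")
```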
Waynehillsdev/waynehills_sentimental_kor
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/ny7777/ddpm-butterflies-128/tensorboard?#scalars)
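The "How to use" snippet above is still a TODO; a minimal sketch with the generic `DDPMPipeline` API from 🤗 Diffusers follows, using the repo id taken from the card's TensorBoard link.
```python
# Minimal sketch: sample one image from the trained unconditional pipeline.
from diffusers import DDPMPipeline

# repo id taken from the TensorBoard link in the card
pipeline = DDPMPipeline.from_pretrained("ny7777/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full DDPM sampling loop
image.save("butterfly.png")
```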
Doxophobia/DialoGPT-medium-celeste
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/pokemon metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-pokemon-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/pokemon` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 300 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/ny7777/ddpm-pokemon-128/tensorboard?#scalars)
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xlsr-53-torgo-8batch-30epochs-500steps results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-torgo-8batch-30epochs-500steps This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0747 - Cer: 0.3015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 25.2161 | 2.53 | 500 | 5.2319 | 0.9816 | | 3.649 | 5.05 | 1000 | 3.7028 | 0.9816 | | 2.8241 | 7.58 | 1500 | 2.0426 | 0.8802 | | 1.6018 | 10.1 | 2000 | 1.2611 | 0.4394 | | 1.0374 | 12.63 | 2500 | 1.1117 | 0.3713 | | 0.7962 | 15.15 | 3000 | 1.0730 | 0.3445 | | 0.6436 | 17.68 | 3500 | 1.1026 | 0.3309 | | 0.5344 | 20.2 | 4000 | 1.0734 | 0.3100 | | 0.4556 | 22.73 | 4500 | 1.0877 | 0.3177 | | 0.3992 | 25.25 | 5000 | 1.0614 | 0.3065 | | 0.3764 | 27.78 | 5500 | 1.0747 | 0.3015 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
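For inference, a hedged sketch of greedy CTC decoding with this checkpoint is shown below; the repository owner prefix and the input filename are placeholders, since the card does not state them.
```python
# Hedged sketch: greedy CTC decoding with the fine-tuned checkpoint.
# The repo id (owner prefix) and "sample.wav" are assumptions.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "<owner>/wav2vec2-xlsr-53-torgo-8batch-30epochs-500steps"  # assumed
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```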
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-8
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - custom_squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the custom_squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2055 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2702 | 1.0 | 5533 | 1.2055 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
DoyyingFace/bert-asian-hate-tweets-asian-unclean-slanted
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- language: - nl license: apache-2.0 tags: - summarization - longt5 - seq2seq datasets: - yhavinga/mc4_nl_cleaned - yhavinga/cnn_dailymail_dutch pipeline_tag: summarization widget: - text: 'Het Van Goghmuseum in Amsterdam heeft vier kostbare prenten verworven van Mary Cassatt, de Amerikaanse impressionistische kunstenaar en tijdgenoot van Vincent van Gogh. Dat heeft het museum woensdagmiddag op een persconferentie bekendgemaakt. Het gaat om drie grote kleurenetsen en een zwart-wit litho met voorstellingen van vrouwen. Voor deze prenten, die afkomstig zijn van een Amerikaanse verzamelaar, betaalde het museum ruim 1,4 miljoen euro. Drie grote fondsen en een aantal particulieren hebben samen de aankoopsom beschikbaar gesteld. Mary Stevenson Cassatt (1844-1926) woonde en werkte lange tijd in Frankrijk. Ze staat met haar impressionistische schilderijen en tekeningen te boek als een van de vernieuwers van de Parijse kunstwereld in de late negentiende eeuw. Het Van Goghmuseum rekent haar prenten „tot het mooiste wat op grafisch gebied in het fin de siècle is geproduceerd”. De drie aangekochte kleurenetsen – Het doorpassen, De brief en Badende vrouw – komen uit een serie van tien waarmee Cassatt haar naam als (prent)kunstenaar definitief vestigde. Ze maakte de etsen na een bezoek in 1890 aan een tentoonstelling van Japanse prenten in Parijs. Over die expositie schreef de Amerikaanse aan haar vriendin Berthe Morisot, een andere vrouwelijke impressionist: „We kunnen de Japanse prenten in de Beaux-Arts gaan bekijken. Echt, die mag je niet missen. Als je kleurenprenten wilt maken, is er niets mooiers voorstelbaar. Ik droom ervan en denk nergens anders meer aan dan aan kleur op koper.' - text: 'Afgelopen zaterdagochtend werden Hunga Tonga en Hunga Hapai opnieuw twee aparte eilanden toen de vulkaan met een hevige explosie uitbarstte. De aanloop tot de uitbarsting begon al eind vorig jaar met kleinere explosies. Begin januari nam de activiteit af en dachten geologen dat de vulkaan tot rust was gekomen. Toch barstte hij afgelopen zaterdag opnieuw uit, veel heviger dan de uitbarstingen ervoor. Vlák voor deze explosie stortte het kilometerslange verbindingsstuk in en verdween onder het water. De eruptie duurde acht minuten. De wolk van as en giftige gasdeeltjes, zoals zwaveloxide, die daarbij vrijkwam, reikte tot dertig kilometer hoogte en was zo’n vijfhonderd kilometer breed. Ter vergelijking: de pluimen uit de recente vulkaanuitbarsting op La Palma reikten maximaal zo’n vijf kilometer hoog. De hoofdstad van Tonga, vijfenzestig kilometer verderop is bedekt met een dikke laag as. Dat heeft bijvoorbeeld gevolgen voor de veiligheid van het drinkwater op Tonga. De uitbarsting van de onderzeese vulkaan in de eilandstaat Tonga afgelopen zaterdag was bijzonder heftig. De eruptie veroorzaakte een tsunami die reikte van Nieuw-Zeeland tot de Verenigde Staten en in Nederland ging de luchtdruk omhoog. Geologen verwachten niet dat de vulkaan op Tonga voor een lange wereldwijde afkoeling zorgt, zoals bij andere hevige vulkaanuitbarstingen het geval is geweest. De vulkaan ligt onder water tussen de onbewoonde eilandjes Hunga Tonga (0,39 vierkante kilometer) en Hunga Ha’apai (0,65 vierkante kilometer). Magma dat bij kleinere uitbarsting in 2009 en 2014 omhoog kwam, koelde af en vormde een verbindingsstuk tussen de twee eilanden in. Een explosie van een onderwatervulkaan als die bij Tonga is heftiger dan bijvoorbeeld die uitbarsting op La Palma. 
„Dat komt doordat het vulkanisme hier veroorzaakt wordt door subductie: de Pacifische plaat zinkt onder Tonga de aardmantel in en neemt water mee omlaag”, zegt hoogleraar paleogeografie Douwe van Hinsbergen van de Universiteit Utrecht. „Dit water komt met magma als gas, als waterdamp, mee omhoog. Dat voert de druk onder de aardkost enorm op. Arwen Deuss, geowetenschapper aan de Universiteit Utrecht, vergelijkt het met een fles cola. „Wanneer je een fles cola schudt, zal het gas er met veel geweld uitkomen. Dat is waarschijnlijk wat er gebeurd is op Tonga, maar we weten het niet precies.”' model-index: - name: yhavinga/long-t5-tglobal-small-dutch-cnn-bf16-test results: - task: type: summarization name: Summarization dataset: name: yhavinga/cnn_dailymail_dutch type: yhavinga/cnn_dailymail_dutch config: 3.0.0 split: test metrics: - type: rouge value: 38.8539 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTY5ZmNlMDcxZGE2NDU4MmQ2YzEyMTFmNWI5MGNlZTY4MmE3NWZjMzU4ZTRlZjAyYWViMTA2OWUwZjFjZjZlYSIsInZlcnNpb24iOjF9.LxT4sfu6vASdl8oKPLc_6hjwUXRFBhQ96pAeA5bVVjTyimqJcBmc44rYIVesIoY8Oxjq35fZ9mgVvJTWGFtWDQ - type: rouge value: 15.6702 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGZkODIwODMxMWQxMzc2OThkYzU4M2UzNDBlNGZjYmE5ZDE1MTU4NDIxZDJhNGU3NzhhMTk1Zjg2YWZjZmU0NCIsInZlcnNpb24iOjF9.M7qs9ox5wMsaWBtK4Db54CNfosuA2L114bukL6PDCG9cy6ObuU4s1UHqIS_Q-0sDXZ5l3gEenGqKaMRod3NOCg - type: rouge value: 25.7053 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmNlNTY5NTY1MDhiMDY3ZmFlNWU2ZmFkOWEwZmNiYzNmNDU0MmFjODEzYjJkNGQ4ZGVlMzlmMjk3OTE1MDljNyIsInZlcnNpb24iOjF9.2dQVLYKfUye_enR6T4xF61bJtZ5-TeZM13wiogoUVloYqTARLLxxTPGUAubBxyv2nYz3io5Xm5l_rnzfzI81Ag - type: rouge value: 35.5005 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjU1ZDYxY2ViOTc3ODE3MzQ4ZjUwZjM1MmEzMTU0OTZhZGYzYWNhNGRjYTBlOWJiMDFhOTQ1MjkyZDkwZDhjYSIsInZlcnNpb24iOjF9.mgMBzHHczYZ_NFxNZ5-jG9y32ZYSsWzQmX6Kdqw_v5wIFGc_Nz7VUIaiGjwqLIwFnOgNhtiF3NuvtvaP3_XgAw - type: loss value: 1.766959547996521 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjY3MGIyYzBmNDcwOTlhZGM3MGE3MjA5NTE4ZTkyMWViNjc4MWJkYjgyMzE4YzU4NTRjNDA4NjNiMjRhNDhlOCIsInZlcnNpb24iOjF9.6IIeFmy95UMQvf_IzTfUO3bkkNs0rDPZJWD4jZdinTjHvY4mmgey-mEY7yI2yrV1tdXrSKJn6ynvIx3TlE9RDA - type: gen_len value: 107.1673 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjU1ZTU5Nzg1ZDIyNWQ2MDMwNmRkNmQ1NWRhZTdlNmVhZGZjZDEzN2JkZGJjNzA5NDI4NzE1MTljNzQ1ZTk1ZCIsInZlcnNpb24iOjF9.-IiLbsGrV2KVeJGyLm7wGg-eBtnPJdArWgtveybRTjQn6Lb4Nol3dGMo8UefJNg5UJ018U2HnPptnmPRt-R3Dg --- # long-t5-tglobal-small-dutch-cnn-bf16-test See logs at https://wandb.ai/yepster/long-t5-tglobal-small-dutch-cnn/runs/1qmed8ll?workspace=user-yepster
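A minimal sketch of running the checkpoint named in the card's heading through the standard `summarization` pipeline (the truncated widget text stands in for a real article):
```python
from transformers import pipeline

# Checkpoint name taken from the card's heading.
summarizer = pipeline(
    "summarization",
    model="yhavinga/long-t5-tglobal-small-dutch-cnn-bf16-test",
)
text = "Het Van Goghmuseum in Amsterdam heeft vier kostbare prenten verworven ..."
print(summarizer(text, max_length=142)[0]["summary_text"])
```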
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image library_name: "stable-diffusion" inference: false extra_gated_prompt: |- One more step before getting this model. This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well. extra_gated_fields: I have read the License and agree with its terms: checkbox --- Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The **Stable-Diffusion-v-1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original) checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). #### Download the weights - [sd-v1-4.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt) - [sd-v1-4-full-ema.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4-full-ema.ckpt) These weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the D🧨iffusers library, [come here](https://huggingface.co/CompVis/stable-diffusion-v1-4). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. 
### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`, which were trained as follows, - `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. 
## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
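As a supplement to the "Download the weights" section above (not part of the original card), the checkpoints can also be fetched programmatically. A minimal sketch using `huggingface_hub`, assuming you have accepted the license on the Hub and are authenticated (e.g. via `huggingface-cli login`):

```python
# Minimal download sketch (assumption: license accepted and you are logged in).
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",
    filename="sd-v1-4.ckpt",
)
print(ckpt_path)  # local path of the cached checkpoint, ready for the CompVis codebase
```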
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.9271792499777299 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2125 - Accuracy: 0.927 - F1: 0.9272 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8137 | 1.0 | 250 | 0.2974 | 0.912 | 0.9095 | | 0.244 | 2.0 | 500 | 0.2125 | 0.927 | 0.9272 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
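The hyperparameters listed above map directly onto 🤗 Transformers `TrainingArguments`. A hedged sketch reproducing them, assuming single-GPU training (so `train_batch_size` equals `per_device_train_batch_size`); the output directory name is a placeholder, not taken from the card:

```python
# Sketch of TrainingArguments matching the card's listed hyperparameters.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are already the Trainer defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```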
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-50
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: rebolforces/testpyramidsrnd 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀 ### Stats ``` "final_checkpoint": { "steps": 3000022, "file_path": "results/Pyramids Training2/Pyramids.onnx", "reward": 1.87466665605704, "creation_time": 1660985715.2452054, "auxillary_file_paths": [ "results/Pyramids Training2/Pyramids/Pyramids-3000022.pt" ] } ```
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
2022-08-20T09:19:41Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8204761904761905 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6737967914438503 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6676557863501483 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7698721511951084 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.906 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6535087719298246 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6504629629629629 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9218020189844809 - name: F1 (macro) type: f1_macro value: 0.9191478903594151 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.857981220657277 - name: F1 (macro) type: f1_macro value: 0.6986679597011488 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6912242686890574 - name: F1 (macro) type: f1_macro value: 0.6803154613622127 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9585448981011337 - name: F1 (macro) type: f1_macro value: 0.8855707493953529 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9169539329363836 - name: F1 (macro) type: f1_macro value: 0.9153488154400948 --- # relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6737967914438503 - Accuracy on SAT: 0.6676557863501483 - Accuracy on BATS: 0.7698721511951084 - Accuracy on U2: 0.6535087719298246 - Accuracy on U4: 0.6504629629629629 - Accuracy on Google: 0.906 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9218020189844809 - Micro F1 score on CogALexV: 0.857981220657277 - Micro F1 score on EVALution: 0.6912242686890574 - Micro F1 score on K&H+N: 0.9585448981011337 - Micro F1 score on ROOT09: 0.9169539329363836 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8204761904761905 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip: ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
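As a follow-up to the usage snippet above (not part of the original card), relation embeddings can be compared with plain cosine similarity; only the documented `get_embedding` call is assumed, everything else is plain numpy:

```python
# Compare two word pairs by the cosine similarity of their relation embeddings.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce")
a = np.asarray(model.get_embedding(['Tokyo', 'Japan']))   # shape (1024,)
b = np.asarray(model.get_embedding(['Paris', 'France']))  # shape (1024,)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)  # values close to 1.0 indicate similar relations (both capital-of)
```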
DoyyingFace/bert-asian-hate-tweets-asonam-unclean
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- language: - zh license: apache-2.0 tags: - DeBERTa - CWS - Chinese Word Segmentation - Chinese inference: false --- # Erlangshen-DeBERTa-v2-97M-CWS-Chinese - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/) ## 简介 Brief Introduction 善于处理NLU任务,采用中文分词的,中文版的0.97亿参数DeBERTa-v2-Base。 Good at solving NLU tasks, adopting Chinese Word Segmentation (CWS), Chinese DeBERTa-v2-Base with 97M parameters. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | DeBERTa-v2 | 97M | 中文分词-中文 CWS-Chinese | ## 模型信息 Model Information 参考论文:[DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://readpaper.com/paper/3033187248) 为了得到一个中文版的DeBERTa-v2-Base(97M),我们用悟道语料库(180G版本)进行预训练。我们使用了中文分词。具体地,我们在预训练阶段中使用了[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen)大概花费了24张A100约7天。 To get a Chinese DeBERTa-v2-Base (97M), we use WuDao Corpora (180 GB version) for pre-training. We employ Chinese Word Segmentation (CWS). Specifically, we use the [fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) in the pre-training phase, which took about 7 days on 24 A100 GPUs. ## 使用 Usage ```python from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline import torch tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-CWS-Chinese', use_fast=False) model=AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-CWS-Chinese') text = '生活的真谛是[MASK]。' fillmask_pipe = FillMaskPipeline(model, tokenizer, device=7)  # device is a GPU index; use -1 for CPU or 0 for the first GPU print(fillmask_pipe(text, top_k=10)) ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
DoyyingFace/bert-asian-hate-tweets-concat-clean
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- language: - nl - en - multilingual license: apache-2.0 tags: - byt5 - translation - seq2seq datasets: - yhavinga/mc4_nl_cleaned - yhavinga/ccmatrix pipeline_tag: translation widget: - text: 'It is a painful and tragic spectacle that rises before me: I have drawn back the curtain from the rottenness of man. This word, in my mouth, is at least free from one suspicion: that it involves a moral accusation against humanity.' - text: Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible. His camouflage was perfect, since the waiting room had a disorderly and demoralized air, too. Chairs and ashtrays had been moved away from the walls. The floor was paved with spattered dropcloths. --- Logs at https://wandb.ai/yepster/byt5-small-ccmatrix-en-nl/runs/1wm9igj9?workspace=user-yepster
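The card ships no usage snippet, so here is a hedged inference sketch; the repository id below is inferred from the W&B logs URL above and is an assumption, not something the card states:

```python
# Hedged EN->NL translation sketch. The repo id is assumed from the logs URL.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yhavinga/byt5-small-ccmatrix-en-nl"  # assumption, see lead-in
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("It is a painful and tragic spectacle.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```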
albert-base-v2
[ "pytorch", "tf", "jax", "rust", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4,785,283
2022-08-20T11:44:12Z
--- language: - en tags: - zero-shot-image-classification license: mit datasets: - coco2017 --- # Tiny CLIP ## Introduction This is a smaller version of CLIP trained for EN only. The training script can be found [here](https://www.kaggle.com/code/sachin/tiny-en-clip/). This model is roughly 8 times smaller than CLIP. This was achieved by using a small text model (`microsoft/xtremedistil-l6-h256-uncased`) and a small vision model (`edgenext_small`). For an in-depth guide to training CLIP, see [this blog](https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html). ## Usage For now, this is the recommended way to use this model: ```bash git lfs install git clone https://huggingface.co/sachin/tiny_clip cd tiny_clip ``` Once you are in the folder, you can do the following: ```python import models text_encoder, tokenizer, vision_encoder, transform = models.get_model() ```
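A heavily hedged sketch of a possible next step. The exact call signatures of the objects returned by `models.get_model()` are not documented in this card, so every call below is an assumption rather than the repository's actual API:

```python
# Assumed inference flow: encode one image and one caption, then compare.
# How transform/tokenizer/encoders are called, and that the encoders return
# pooled embeddings, are all assumptions (see lead-in).
import torch
from PIL import Image

import models

text_encoder, tokenizer, vision_encoder, transform = models.get_model()

image = transform(Image.open("cat.jpg")).unsqueeze(0)    # assumed torchvision-style transform
tokens = tokenizer(["a photo of a cat"], return_tensors="pt", padding=True)

with torch.no_grad():
    image_emb = vision_encoder(image)                    # assumed: pooled image embedding
    text_emb = text_encoder(**tokens)                    # assumed: pooled text embedding

print(torch.cosine_similarity(image_emb, text_emb))      # higher = better image-text match
```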
albert-large-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
687
2022-08-20T12:17:24Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
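A hedged inference sketch for this kind of fine-tuned token classifier; the Hub path is a placeholder, since the card does not state the final repository id:

```python
# NER inference sketch; replace the placeholder with the actual repo id.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de",  # placeholder
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```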
albert-xlarge-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
341
2022-08-20T12:30:03Z
--- tags: - conversational --- # Harry Potter DialoGPT Model
albert-xxlarge-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7,091
2022-08-20T13:26:17Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image widget: - text: "A high tech solarpunk utopia in the Amazon rainforest" example_title: Amazon rainforest - text: "A pikachu fine dining with a view to the Eiffel Tower" example_title: Pikachu in Paris - text: "A mecha robot in a favela in expressionist style" example_title: Expressionist robot - text: "an insect robot preparing a delicious meal" example_title: Insect robot - text: "A small cabin on top of a snowy mountain in the style of Disney, artstation" example_title: Snowy disney cabin extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # Stable Diffusion v1-4 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned for 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion. ### PyTorch ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from diffusers import StableDiffusionPipeline model_id = "CompVis/stable-diffusion-v1-4" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Note**: If you are limited by GPU memory and have less than 4GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision, as done above, instead of the default float32 precision. You can do so by telling diffusers to expect the weights to be in float16 precision: ```py import torch pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) pipe.enable_attention_slicing()  # trades a small amount of speed for lower peak memory usage prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` To swap out the noise scheduler, pass it to `from_pretrained`: ```python from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler model_id = "CompVis/stable-diffusion-v1-4" # Use the Euler scheduler here instead scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` ### JAX/Flax To use StableDiffusion on TPUs and GPUs for faster inference you can leverage JAX/Flax.
Running the pipeline with the default PNDMScheduler: ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, num_samples) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` **Note**: If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision, as done above, instead of the default `float32` precision. You can do so by telling diffusers to load the weights from the "bf16" branch. ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, num_samples) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. 
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide four checkpoints, which were trained as follows. - [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
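One supplementary sanity check (not part of the original card): the f=8 autoencoder downsampling described in the Training Procedure section means a 512x512 RGB image maps to a 4x64x64 latent, which can be verified with the pipeline's released VAE:

```python
# Verify the stated H/8 x W/8 x 4 latent shape with the released VAE.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
dummy = torch.randn(1, 3, 512, 512)  # random stand-in for a normalized image
with torch.no_grad():
    latents = vae.encode(dummy).latent_dist.sample()
print(latents.shape)  # torch.Size([1, 4, 64, 64])
```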
albert-xxlarge-v2
[ "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42,640
2022-08-20T13:28:20Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1608 - F1: 0.8593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 | | 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 | | 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
bert-base-cased-finetuned-mrpc
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11,644
2022-08-20T13:31:36Z
--- language: - en thumbnail: "https://github.com/faGH/fa.creative/blob/master/Icons/FrostAura/FA%20Logo/FrostAura.Logo.Complex.png?raw=true" tags: - text-generation - novel-generation - fiction - gpt-neo - pytorch license: "mit" --- <p align="center"> <img src="https://github.com/faGH/fa.creative/blob/master/Icons/FrostAura/FA%20Logo/FrostAura.Logo.Complex.png?raw=true" width="75" title="hover text"> </p> # fa.intelligence.models.generative.novels.fiction ## Description This FrostAura Intelligence model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) for fictional text content generation. ## Getting Started ### PIP Installation ```bash pip install -U --no-cache-dir transformers ``` ### Usage ```python from transformers import pipeline model_name = 'FrostAura/gpt-neo-1.3B-fiction-novel-generation' generator = pipeline('text-generation', model=model_name) prompt = 'So far my day has been ' gen_text = generator(prompt, do_sample=True, min_length=50)  # returns a list of generated-text dicts print(f'Result: {gen_text}') ``` ## Further Fine-Tuning [in development](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo.py) ## Support If you enjoy FrostAura open-source content and would like to support us in continuous delivery, please consider a donation via a platform of your choice. | Supported Platforms | Link | | ------------------- | ---- | | PayPal | [Donate via Paypal](https://www.paypal.com/donate/?hosted_button_id=SVEXJC9HFBJ72) | For any queries, contact [email protected].
bert-base-chinese
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "zh", "arxiv:1810.04805", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,377,486
2022-08-20T13:43:58Z
```python important_labels = { "no_relation": "관계 없음", "per:employee_of": "고용", "org:member_of": "소속", "org:place_of_headquarters": "장소", "org:top_members/employees": "대표", "per:origin": "출신", "per:title": "직업", "per:colleagues": "동료", "org:members": "소속", "org:alternate_names": "본명", "per:place_of_residence": "거주지" } ``` https://colab.research.google.com/drive/1K3lygU6BBLsFwI99JNaX8BauH7vgUsv9?authuser=1#scrollTo=h8-68Ko_pKpJ
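A hedged illustration (not from the original card) of how a label map like this is typically applied to a relation classifier's output; the predicted label below is made up for the example:

```python
# Hypothetical post-processing step: map an English relation label to its
# Korean display name. "per:employee_of" is an example value, not a real output.
predicted_label = "per:employee_of"
print(important_labels.get(predicted_label, predicted_label))  # -> 고용
```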
bert-base-german-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "exbert", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
175,983
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.9269716778589558 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2152 - Accuracy: 0.927 - F1: 0.9270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8354 | 1.0 | 250 | 0.3134 | 0.9065 | 0.9050 | | 0.2478 | 2.0 | 500 | 0.2152 | 0.927 | 0.9270 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
bert-base-german-dbmdz-cased
[ "pytorch", "jax", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,814
2022-08-20T14:24:24Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.911547619047619 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5935828877005348 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6023738872403561 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7498610339077265 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.868 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618421052631579 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6365740740740741 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9190899502787404 - name: F1 (macro) type: f1_macro value: 0.9137760457433256 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.854225352112676 - name: F1 (macro) type: f1_macro value: 0.6960792498811619 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6738894907908992 - name: F1 (macro) type: f1_macro value: 0.6683142084374337 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9637615636085414 - name: F1 (macro) type: f1_macro value: 0.890107974704234 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9053588216859918 - name: F1 (macro) type: f1_macro value: 0.9023263285944801 --- # relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5935828877005348 - Accuracy on SAT: 0.6023738872403561 - Accuracy on BATS: 0.7498610339077265 - Accuracy on U2: 0.618421052631579 - Accuracy on U4: 0.6365740740740741 - Accuracy on Google: 0.868 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9190899502787404 - Micro F1 score on CogALexV: 0.854225352112676 - Micro F1 score on EVALution: 0.6738894907908992 - Micro F1 score on K&H+N: 0.9637615636085414 - Micro F1 score on ROOT09: 0.9053588216859918 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.911547619047619 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and use the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 21 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
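As a follow-up to the usage snippet above, relation embeddings can be compared directly; a minimal sketch using only the `get_embedding` call documented in this card (the comparison pairs are made up):

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce")

# Embed two word pairs and compare their relations via cosine similarity.
v1 = np.array(model.get_embedding(['Tokyo', 'Japan']))
v2 = np.array(model.get_embedding(['Paris', 'France']))
cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(cosine)  # two capital-of pairs should score relatively high
```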
bert-base-german-dbmdz-uncased
[ "pytorch", "jax", "safetensors", "bert", "fill-mask", "de", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68,305
2022-08-20T14:24:34Z
--- language: - pt thumbnail: "Portuguese BERT for the Legal Domain" pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - transformers datasets: - assin - assin2 - stsb_multi_mt - rufimelo/PortugueseLegalSentences-v0 widget: - source_sentence: "O advogado apresentou as provas ao juíz." sentences: - "O juíz leu as provas." - "O juíz leu o recurso." - "O juíz atirou uma pedra." example_title: "Example 1" model-index: - name: BERTimbau results: - task: name: STS type: STS metrics: - name: Pearson Correlation - assin Dataset type: Pearson Correlation value: 0.74874 - name: Pearson Correlation - assin2 Dataset type: Pearson Correlation value: 0.79532 - name: Pearson Correlation - stsb_multi_mt pt Dataset type: Pearson Correlation value: 0.82254 --- # rufimelo/Legal-BERTimbau-sts-base-ma This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. rufimelo/Legal-BERTimbau-sts-base-ma is based on Legal-BERTimbau-base, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large. It is adapted to the Portuguese legal domain and trained for STS on Portuguese datasets. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Isto é um exemplo", "Isto é um outro exemplo"] model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-base-ma') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-base-ma') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-base-ma') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS | Model| Assin | Assin2|stsb_multi_mt pt| avg| | ---------------------------------------- | ---------- | ---------- |---------- |---------- | | Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462| | Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886| | Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307| | Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657| | Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369| | Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715| | Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142| | Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863| | Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**| | Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165| | Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090| | Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029| | Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 | | ---------------------------------------- | ---------- |---------- |---------- |---------- | | BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640| | BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245| | ---------------------------------------- | ---------- |---------- |---------- |---------- | | paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429| | paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682| ## Training rufimelo/Legal-BERTimbau-sts-base-ma is based on Legal-BERTimbau-base, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) base. Firstly, due to the lack of Portuguese datasets, it was trained using multilingual knowledge distillation. For the multilingual knowledge distillation process, the teacher model was 'sentence-transformers/paraphrase-xlm-r-multilingual-v1', the supported source language was English, and the language to learn was Portuguese. It was then trained for Semantic Textual Similarity through a fine-tuning stage with the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) datasets.
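To illustrate the STS use case described above, a minimal sketch scoring one of the widget sentence pairs from this card with `sentence_transformers.util.cos_sim` (the printed score is illustrative, not a reported result):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-base-ma')

# Sentence pair taken from the widget example in this card.
embeddings = model.encode(
    ["O advogado apresentou as provas ao juíz.", "O juíz leu as provas."],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))
```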
## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors If you use this work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } @inproceedings{fonseca2016assin, title={ASSIN: Avaliacao de similaridade semantica e inferencia textual}, author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S}, booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal}, pages={13--15}, year={2016} } @inproceedings{real2020assin, title={The assin 2 shared task: a quick overview}, author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo}, booktitle={International Conference on Computational Processing of the Portuguese Language}, pages={406--412}, year={2020}, organization={Springer} } @InProceedings{huggingface:dataset:stsb_multi_mt, title = {Machine translated multilingual STS benchmark dataset.}, author={Philip May}, year={2021}, url={https://github.com/PhilipMay/stsb-multi-mt} } ```
bert-base-multilingual-uncased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
328,585
2022-08-20T14:33:30Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8346456692913387 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2763 - F1: 0.8346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 | | 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 | | 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
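For inference with the fine-tuned NER checkpoint above, a minimal sketch (this card does not give the Hub namespace, so the model path is a placeholder, and the example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/xlm-roberta-base-finetuned-panx-fr",  # placeholder path
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Emmanuel Macron est né à Amiens."))
```

The same pattern applies to the sibling PAN-X.it and PAN-X.en checkpoints further down.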
bert-large-cased-whole-word-masking-finetuned-squad
[ "pytorch", "tf", "jax", "rust", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,214
2022-08-20T14:54:19Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8124233755619126 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2630 - F1: 0.8124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 | | 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 | | 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
bert-large-cased-whole-word-masking
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,316
2022-08-20T14:56:55Z
--- language: - en tags: - pytorch - text-generation - causal-lm - rwkv license: apache-2.0 datasets: - the_pile --- # RWKV-4 1.5B # Use RWKV-4 models (NOT RWKV-4a, NOT RWKV-4b) unless you know what you are doing. # Use RWKV-4 models (NOT RWKV-4a, NOT RWKV-4b) unless you know what you are doing. # Use RWKV-4 models (NOT RWKV-4a, NOT RWKV-4b) unless you know what you are doing. ## Model Description RWKV-4 1.5B is an L24-D2048 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. Use https://github.com/BlinkDL/ChatRWKV to run it. ctx_len = 1024 n_layer = 24 n_embd = 2048 RWKV-4-Pile-1B5-20220929-ctx4096.pth : Fine-tuned to ctx_len 4096. * Likely better when ctxlen > 100. Please test. RWKV-4-Pile-1B5-20220903-8040.pth : Trained on the Pile for 332B tokens. * Pile loss 2.0415 * LAMBADA ppl 7.04, acc 56.43% * PIQA acc 72.36% * SC2016 acc 68.73% * Hellaswag acc_norm 52.48% ### Instruct-test models: only useful if you construct your prompt following dataset templates Note that I am using the "Q: instruct\n\nA: result" prompt for all instructs. RWKV-4-Pile-1B5-Instruct-test1-20230124.pth instruct-tuned on https://huggingface.co/datasets/bigscience/xP3all/viewer/en/train RWKV-4-Pile-1B5-Instruct-test2-20230209.pth instruct-tuned on https://huggingface.co/datasets/Muennighoff/flan & NIv2 ### Chinese models RWKV-4-Pile-1B5-EngChn-testNovel-xxx for writing Chinese novels (trained on 200G of Chinese novels) RWKV-4-Pile-1B5-EngChn-testxxx for Chinese Q&A (trained on 10G of Chinese text; only for testing purposes) ## Note: 4 / 4a / 4b models ARE NOT compatible. Use RWKV-4 unless you know what you are doing. RWKV-4b-Pile-1B5-20230217-7954.pth (--my_testing 'a') with a tiny amount of QKV attention to improve performance * Pile loss 1.9947 * LAMBADA ppl 5.82, acc 62.35% * PIQA acc 72.52% * SC2016 acc 68.89% * Hellaswag acc_norm 54.32%
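Since the card stresses the "Q: instruct\n\nA: result" template for the instruct-test checkpoints, a minimal prompt-building sketch (plain string handling only; no RWKV runtime is assumed here):

```python
def make_instruct_prompt(instruction: str) -> str:
    # "Q: instruct\n\nA: result" format from this card, cut off before
    # the answer so that the model completes it.
    return f"Q: {instruction}\n\nA:"

print(make_instruct_prompt("Summarize the Pile dataset in one sentence."))
```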
bert-large-cased
[ "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388,769
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.6886160714285715 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4043 - F1: 0.6886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 | | 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 | | 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
distilbert-base-cased-distilled-squad
[ "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "model-index", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
257,745
2022-08-20T15:57:53Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xlsr-en-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-en-demo This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1356 - Wer: 0.2015 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3911 | 0.5 | 500 | 0.5397 | 0.2615 | | 0.3413 | 1.01 | 1000 | 0.1423 | 0.2137 | | 0.243 | 1.51 | 1500 | 0.1458 | 0.2210 | | 0.2232 | 2.01 | 2000 | 0.1380 | 0.2143 | | 0.162 | 2.51 | 2500 | 0.1464 | 0.2149 | | 0.1384 | 3.02 | 3000 | 0.1348 | 0.2109 | | 0.1164 | 3.52 | 3500 | 0.1324 | 0.2040 | | 0.1103 | 4.02 | 4000 | 0.1310 | 0.2051 | | 0.0857 | 4.53 | 4500 | 0.1356 | 0.2015 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
distilbert-base-cased
[ "pytorch", "tf", "onnx", "distilbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "license:apache-2.0", "has_space" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
574,859
2022-08-20T15:57:54Z
# Commit Hash 1. BART 5 epoch training: 5e2267251ec1555e81f9ed6f090e1f70355ff1c8 2. BART 10 epoch training: 2e347c5f162fe18bb8d874d2bd0b46ae3d9ff175 3. BART 13 epoch training: 58b307615eb37f44a9233318427420b330fb6cea # Dataset [link](https://huggingface.co/datasets/Adapting/Knowledge-Driven-Dialogues) # Training Results | Num of Epochs | Training Loss| Validation Loss| | :--- | :--- | :--- | | 1 | 1.569600 | 1.605180 | | 2 | 1.330900 | 1.523424 | | 3 | 1.212800 | 1.519999 | | 4 | 1.151600 | 1.520266 | | 5 | 1.094100 | 1.516026 | # Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline tokenizer = AutoTokenizer.from_pretrained("Adapting/Knowledge-Driven-Dialogue") model_5 = AutoModelForSeq2SeqLM.from_pretrained("Adapting/Knowledge-Driven-Dialogue", revision = '5e2267251ec1555e81f9ed6f090e1f70355ff1c8') pipe_5 = pipeline('text2text-generation',tokenizer=tokenizer,model=model_5,truncation=True, max_length=1024) knowledge = '''[ett] 的性别是男。[ett] 的星座是天蝎座。[ett] 的代表作品是《姐姐》《孤独的人是可耻的》《爱情》《上苍保佑吃完了饭的人民》。[ett] 的出生地是湖南。[ett] 的毕业院校是陕西机械学院。[ett] 的出生日期是1968年11月17日。[ett] 的爱好是湖南菜。[ett] 的中文名是张楚。[ett] 的爱好是思考。[ett] 的别名是张红兵。[ett] 的国籍是中国。[ett] 的血型是A型。[ett] 的民族是汉。[ett] 的Information是张楚,中国大陆音乐人,原名张红兵,1968年11月生于湖南,在湖南浏阳的外婆家生活了8年,8岁时,跟随父母搬到了陕西。这些年,他走遍了中国大部分的城市,尤其是那些有自然风光的地方,有一些流浪的感觉。他大部份歌曲创作的时候都是走在路上,独自漂泊。10岁那年第一次离家出走,17岁考入原陕西机械学院,即西安理工大学,土木工程系,后又辍学。1987年只身来到北京,从此踏上了音乐之路。张楚的一首《姐姐》唱遍大江南北,并和窦唯、何勇、唐朝乐队赴香港“中国摇滚乐势力”演唱会引起极大轰动,开辟了中国摇滚的鼎盛时代,和许巍和郑钧是曾经的西安”三剑客“。。[ett] 的职业是歌''' usr = '你知道张楚的国籍吗?' input = f'[klg] {knowledge} [usr] {usr}' print(pipe_5(input)) ''' [{'generated_text': '知 道 啊 , 他 是 中 国 人 。'}] ''' ```
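The other checkpoints listed under "Commit Hash" load the same way; a sketch for the 10-epoch model, reusing the revision hash given above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Adapting/Knowledge-Driven-Dialogue")

# 10-epoch BART checkpoint, pinned by the commit hash from this card.
model_10 = AutoModelForSeq2SeqLM.from_pretrained(
    "Adapting/Knowledge-Driven-Dialogue",
    revision='2e347c5f162fe18bb8d874d2bd0b46ae3d9ff175',
)
pipe_10 = pipeline('text2text-generation', tokenizer=tokenizer, model=model_10,
                   truncation=True, max_length=1024)
```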
AaravMonkey/modelRepo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-22T03:17:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.3229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.556 | 1.0 | 767 | 2.3725 | | 2.4458 | 2.0 | 1534 | 2.3396 | | 2.4102 | 3.0 | 2301 | 2.3084 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Abdullaziz/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-22T07:54:52Z
--- language: zh --- # ERNIE-3.0-base-zh ## Introduction ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation More detail: https://arxiv.org/abs/2107.02137 ## Released Model Info This released PyTorch model is converted from the officially released PaddlePaddle ERNIE model, and a series of experiments have been conducted to check the accuracy of the conversion. - Official PaddlePaddle ERNIE repo: https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html - Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch ## How to use You can load the ERNIE-3.0 model as follows: ```python from transformers import BertTokenizer, ErnieForMaskedLM tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-3.0-base-zh") model = ErnieForMaskedLM.from_pretrained("nghuyong/ernie-3.0-base-zh") ``` ## Citation ```bibtex @article{sun2021ernie, title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation}, author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others}, journal={arXiv preprint arXiv:2107.02137}, year={2021} } ```
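Building on the loading snippet above, a minimal masked-LM sketch (the example sentence is made up):

```python
from transformers import BertTokenizer, ErnieForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("nghuyong/ernie-3.0-base-zh")
model = ErnieForMaskedLM.from_pretrained("nghuyong/ernie-3.0-base-zh")

# Fill a single [MASK] token in a Chinese sentence:
# "Paris is the capital of [MASK](-country)."
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("巴黎是[MASK]国的首都。")[0])
```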
Abirate/bert_fine_tuned_cola
[ "tf", "bert", "text-classification", "arxiv:1810.04805", "transformers", "has_space" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2022-08-22T08:52:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_10_0 model-index: - name: wav2vec2-large-xls-r-300m-ja-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ja-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.1407 - Wer: 0.2456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 637 | 5.3238 | 0.9663 | | No log | 2.0 | 1274 | 4.1785 | 0.7662 | | No log | 3.0 | 1911 | 2.3701 | 0.4983 | | No log | 4.0 | 2548 | 1.8443 | 0.4090 | | 6.5781 | 5.0 | 3185 | 1.4892 | 0.3363 | | 6.5781 | 6.0 | 3822 | 1.3229 | 0.2995 | | 6.5781 | 7.0 | 4459 | 1.2418 | 0.2814 | | 6.5781 | 8.0 | 5096 | 1.1928 | 0.2647 | | 1.0184 | 9.0 | 5733 | 1.1584 | 0.2520 | | 1.0184 | 10.0 | 6370 | 1.1407 | 0.2456 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.10.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AdapterHub/bert-base-uncased-pf-duorc_p
[ "bert", "en", "dataset:duorc", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Worm library_name: ml-agents --- # **ppo** Agent playing **Worm** This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm 2. Step 1: Write your model_id: danieladejumo/MLAgents-Worm 3. Step 2: Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
AdapterHub/roberta-base-pf-quoref
[ "roberta", "en", "dataset:quoref", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: brand-detector results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8428270220756531 --- # brand-detector Brand detection is a particular case of image classification, since brand marks may contain only text, images, or a combination of both. In this work, we trained a system for the brand classification of shoes available at https://www.shooos.com/. The method allows retrieving the most similar brands on the basis of their shape, color, business sector, semantics, general characteristics, or a combination of other features. The proposed approach is evaluated on a held-out 15% of the dataset's images. The experiments carried out attained reliable performance results, both quantitatively and qualitatively. How to test: find any image on Google Images related to one of the shoe brands below and test it! This model is not fully trained yet :D ## Example Images #### adidas ![adidas](images/adidas.jpg) #### arkk-copenhagen ![arkk-copenhagen](images/arkk-copenhagen.jpg) #### asics ![asics](images/asics.jpg) #### birkenstock ![birkenstock](images/birkenstock.jpg) #### bjorn-borg ![bjorn-borg](images/bjorn-borg.jpg) #### by-garment-makers ![by-garment-makers](images/by-garment-makers.jpg) #### camper ![camper](images/camper.jpg) #### carhartt-wip ![carhartt-wip](images/carhartt-wip.jpg) #### caterpillar ![caterpillar](images/caterpillar.jpg) #### champion ![champion](images/champion.jpg) #### chpo ![chpo](images/chpo.jpg) #### clae ![clae](images/clae.jpg) #### colorful-standard ![colorful-standard](images/colorful-standard.jpg) #### converse ![converse](images/converse.jpg) #### dc-shoes ![dc-shoes](images/dc-shoes.jpg) #### dedicated ![dedicated](images/dedicated.jpg) #### dickies ![dickies](images/dickies.jpg) #### doughnut ![doughnut](images/doughnut.jpg) #### dr-martens ![dr-martens](images/dr-martens.jpg) #### fila ![fila](images/fila.jpg) #### fjallraven ![fjallraven](images/fjallraven.jpg) #### hanwag ![hanwag](images/hanwag.jpg) #### happy-socks ![happy-socks](images/happy-socks.jpg) #### havaianas ![havaianas](images/havaianas.jpg) #### herschel-supply ![herschel-supply](images/herschel-supply.jpg) #### iriedaily ![iriedaily](images/iriedaily.jpg) #### keepcup ![keepcup](images/keepcup.jpg) #### lefrik ![lefrik](images/lefrik.jpg) #### loqi ![loqi](images/loqi.jpg) #### makia ![makia](images/makia.jpg) #### maloja ![maloja](images/maloja.jpg) #### new-balance ![new-balance](images/new-balance.jpg) #### new-era ![new-era](images/new-era.jpg) #### nike ![nike](images/nike.jpg) #### norba-clothing ![norba-clothing](images/norba-clothing.jpg) #### on-running ![on-running](images/on-running.jpg) #### onitsuka-tiger ![onitsuka-tiger](images/onitsuka-tiger.jpg) #### palladium ![palladium](images/palladium.jpg) #### rains ![rains](images/rains.jpg) #### reebok ![reebok](images/reebok.jpg) #### saucony ![saucony](images/saucony.jpg) #### secrid ![secrid](images/secrid.jpg) #### sneaky ![sneaky](images/sneaky.jpg) #### stance ![stance](images/stance.jpg) #### superfeet ![superfeet](images/superfeet.jpg) #### the-north-face ![the-north-face](images/the-north-face.jpg) #### timberland ![timberland](images/timberland.jpg) #### toms ![toms](images/toms.jpg) #### triwa ![triwa](images/triwa.jpg) #### under-armour ![under-armour](images/under-armour.jpg) #### vans ![vans](images/vans.jpg) #### veja ![veja](images/veja.jpg)
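For trying the model on one of the example images, a minimal sketch (the Hub namespace of this checkpoint is not stated in the card, so the model path below is a placeholder):

```python
from transformers import pipeline

# Placeholder path: substitute the actual namespace of this checkpoint.
clf = pipeline("image-classification", model="your-org/brand-detector")
print(clf("images/nike.jpg")[:3])  # top predicted brands with scores
```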
Adharsh2608/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-23T04:55:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-chunking results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9229691876750701 - name: Recall type: recall value: 0.9217857559156079 - name: F1 type: f1 value: 0.9223770922027176 - name: Accuracy type: accuracy value: 0.961882616118208 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-chunking This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1594 - Precision: 0.9230 - Recall: 0.9218 - F1: 0.9224 - Accuracy: 0.9619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1887 | 1.0 | 1756 | 0.1793 | 0.9167 | 0.9112 | 0.9139 | 0.9573 | | 0.128 | 2.0 | 3512 | 0.1552 | 0.9228 | 0.9187 | 0.9207 | 0.9609 | | 0.091 | 3.0 | 5268 | 0.1594 | 0.9230 | 0.9218 | 0.9224 | 0.9619 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AdharshJolly/HarryPotterBot-Model
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - conversational --- # Rick DialoGPT Model
AhmedHassan19/model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 --- [Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks. Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana). ## Swin Transformer model HPU configuration This model only contains the `GaudiConfig` file for running the [Swin Transformer](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) model on Habana's Gaudi processors (HPU). **This model contains no model weights, only a GaudiConfig.** This enables you to specify: - `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP) - `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation - `hmp_bf16_ops`: list of operators that should run in bf16 - `hmp_fp32_ops`: list of operators that should run in fp32 - `hmp_is_verbose`: verbosity - `use_fused_adam`: whether to use Habana's custom AdamW implementation - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator ## Usage The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs. [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/image-classification/run_image_classification.py) is an image classification example script to fine-tune a model. You can run it with Swin with the following command: ```bash python run_image_classification.py \ --model_name_or_path microsoft/swin-base-patch4-window7-224-in22k \ --dataset_name cifar10 \ --output_dir /tmp/outputs/ \ --remove_unused_columns False \ --do_train \ --do_eval \ --learning_rate 3e-5 \ --num_train_epochs 5 \ --per_device_train_batch_size 64 \ --per_device_eval_batch_size 64 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --save_total_limit 3 \ --seed 1337 \ --use_habana \ --use_lazy_mode \ --gaudi_config_name Habana/swin \ --throughput_warmup_steps 2 \ --ignore_mismatched_sizes ``` Check out the [documentation](https://huggingface.co/docs/optimum/habana/index) for more advanced usage and examples.
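Outside of the example script, the configuration itself can be inspected programmatically; a minimal sketch, assuming only the `GaudiConfig` class from optimum-habana and the field names listed above:

```python
from optimum.habana import GaudiConfig

# Load the weight-free configuration described in this card.
gaudi_config = GaudiConfig.from_pretrained("Habana/swin")
print(gaudi_config.use_fused_adam, gaudi_config.use_fused_clip_norm)
```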
Amrrs/indian-foods
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index", "autotrain_compatible" ]
image-classification
{ "architectures": [ "ViTForImageClassification" ], "model_type": "vit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- tags: - generated_from_trainer datasets: - bc4chemd metrics: - precision - recall - f1 - accuracy model-index: - name: electramed-small-BC4CHEMD-ner results: - task: name: Token Classification type: token-classification dataset: name: bc4chemd type: bc4chemd config: bc4chemd split: train args: bc4chemd metrics: - name: Precision type: precision value: 0.7715624436835465 - name: Recall type: recall value: 0.6760888102832959 - name: F1 type: f1 value: 0.7206773498518718 - name: Accuracy type: accuracy value: 0.9770623458780496 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electramed-small-BC4CHEMD-ner This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the bc4chemd dataset. It achieves the following results on the evaluation set: - Loss: 0.0655 - Precision: 0.7716 - Recall: 0.6761 - F1: 0.7207 - Accuracy: 0.9771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0882 | 1.0 | 1918 | 0.1058 | 0.6615 | 0.3942 | 0.4940 | 0.9635 | | 0.0555 | 2.0 | 3836 | 0.0820 | 0.7085 | 0.5133 | 0.5954 | 0.9689 | | 0.0631 | 3.0 | 5754 | 0.0769 | 0.6892 | 0.5743 | 0.6266 | 0.9699 | | 0.0907 | 4.0 | 7672 | 0.0682 | 0.7623 | 0.5923 | 0.6666 | 0.9740 | | 0.0313 | 5.0 | 9590 | 0.0675 | 0.7643 | 0.6223 | 0.6860 | 0.9749 | | 0.0306 | 6.0 | 11508 | 0.0662 | 0.7654 | 0.6398 | 0.6970 | 0.9754 | | 0.0292 | 7.0 | 13426 | 0.0656 | 0.7694 | 0.6552 | 0.7077 | 0.9763 | | 0.1025 | 8.0 | 15344 | 0.0658 | 0.7742 | 0.6687 | 0.7176 | 0.9769 | | 0.0394 | 9.0 | 17262 | 0.0662 | 0.7741 | 0.6731 | 0.7201 | 0.9770 | | 0.0378 | 10.0 | 19180 | 0.0655 | 0.7716 | 0.6761 | 0.7207 | 0.9771 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Ankitha/DialoGPT-small-harrypottery
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - coscan-speech metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-coscan-age_group results: - task: name: Audio Classification type: audio-classification dataset: name: Coscan Speech type: NbAiLab/coscan-speech args: no metrics: - name: Test Accuracy type: accuracy value: 0.6389183627536092 - name: Validation Accuracy type: accuracy value: 0.9983911939034716 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-coscan-age_group This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the coscan-speech dataset (`age_group`). It achieves the following results on the evaluation set: - Loss: 0.0118 - Accuracy: 0.9984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3669 | 1.0 | 6644 | 0.1547 | 0.9449 | | 0.0662 | 2.0 | 13288 | 0.0980 | 0.9807 | | 0.0512 | 3.0 | 19932 | 0.0504 | 0.9911 | | 0.0008 | 4.0 | 26576 | 0.0302 | 0.9955 | | 0.0 | 5.0 | 33220 | 0.0118 | 0.9984 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.10.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/cline
[ "pytorch", "roberta", "transformers" ]
null
{ "architectures": [ "LecbertForPreTraining" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Sayan007/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Sayan007/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3181 - Validation Loss: 0.5309 - Train Matthews Correlation: 0.4539 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5156 | 0.4485 | 0.5232 | 0 | | 0.3181 | 0.5309 | 0.4539 | 1 | ### Framework versions - Transformers 4.21.2 - TensorFlow 2.7.0 - Datasets 2.3.2 - Tokenizers 0.12.1
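Since this is a Keras checkpoint, it can be reloaded for inference with the TensorFlow classes; a minimal sketch (the label order is an assumption and should be checked against the model config):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Sayan007/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # CoLA-style [unacceptable, acceptable] probabilities (order assumed)
```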
AnonymousSub/declutr-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use A minimal sampling sketch (the repo id `muyang/ddpm-butterflies-128` is taken from the TensorBoard link below; assumes a recent `diffusers` release): ```python from diffusers import DDPMPipeline; pipeline = DDPMPipeline.from_pretrained("muyang/ddpm-butterflies-128"); image = pipeline().images[0]; image.save("butterfly.png") ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data The model was trained on butterfly images from the `huggan/smithsonian_butterflies_subset` dataset. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/muyang/ddpm-butterflies-128/tensorboard?#scalars)
AnonymousSub/declutr-emanuals-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - generated_from_trainer datasets: - korquad_modified_for_t5_qg model-index: - name: t5-end2end-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end2end-questions-generation This model is a fine-tuned version of [digit82/kolang-t5-base](https://huggingface.co/digit82/kolang-t5-base) on the korquad_modified_for_t5_qg dataset. It achieves the following results on the evaluation set: - Loss: 2.1449 ## Model description More information needed ## Training and evaluation data KorQuAD V1.0 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6685 | 0.66 | 100 | 2.4355 | | 2.3957 | 1.32 | 200 | 2.2428 | | 2.1795 | 1.98 | 300 | 2.1664 | | 1.9408 | 2.65 | 400 | 2.1467 | | 1.8333 | 3.31 | 500 | 2.1470 | | 1.7319 | 3.97 | 600 | 2.1194 | | 1.6095 | 4.63 | 700 | 2.1348 | | 1.5662 | 5.3 | 800 | 2.1433 | | 1.5038 | 5.96 | 900 | 2.1319 | | 1.45 | 6.62 | 1000 | 2.1449 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
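A rough generation sketch (the repo path is a placeholder, and the `generate questions:` prompt prefix is an assumption carried over from the usual `*_modified_for_t5_qg` recipes; verify it against the dataset):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "<user>/t5-end2end-questions-generation"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

context = "generate questions: " + "한국어 문단을 여기에 입력하세요."  # Korean paragraph goes here
inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # generated questions, typically separator-delimited
```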
AnonymousSub/declutr-model-emanuals
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-small-finetuned-ner-to-multilabel-finer-139 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-finetuned-ner-to-multilabel-finer-139 This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1398 | 0.0 | 500 | 0.0244 | | 0.0164 | 0.01 | 1000 | 0.0114 | | 0.01 | 0.01 | 1500 | 0.0084 | | 0.0081 | 0.02 | 2000 | 0.0073 | | 0.0072 | 0.02 | 2500 | 0.0068 | | 0.0069 | 0.03 | 3000 | 0.0065 | | 0.0067 | 0.03 | 3500 | 0.0063 | | 0.0066 | 0.04 | 4000 | 0.0062 | | 0.0061 | 0.04 | 4500 | 0.0062 | | 0.0069 | 0.04 | 5000 | 0.0061 | | 0.0063 | 0.05 | 5500 | 0.0061 | | 0.0062 | 0.05 | 6000 | 0.0061 | | 0.006 | 0.06 | 6500 | 0.0061 | | 0.0059 | 0.06 | 7000 | 0.0056 | | 0.0058 | 0.07 | 7500 | 0.0054 | | 0.0054 | 0.07 | 8000 | 0.0054 | | 0.0057 | 0.08 | 8500 | 0.0053 | | 0.0057 | 0.08 | 9000 | 0.0052 | | 0.0056 | 0.08 | 9500 | 0.0051 | | 0.0051 | 0.09 | 10000 | 0.0050 | | 0.0054 | 0.09 | 10500 | 0.0049 | | 0.005 | 0.1 | 11000 | 0.0048 | | 0.0049 | 0.1 | 11500 | 0.0046 | | 0.0049 | 0.11 | 12000 | 0.0046 | | 0.0046 | 0.11 | 12500 | 0.0044 | | 0.0043 | 0.12 | 13000 | 0.0043 | | 0.0045 | 0.12 | 13500 | 0.0042 | | 0.0042 | 0.12 | 14000 | 0.0042 | | 0.0042 | 0.13 | 14500 | 0.0039 | | 0.0042 | 0.13 | 15000 | 0.0038 | | 0.0039 | 0.14 | 15500 | 0.0037 | | 0.004 | 0.14 | 16000 | 0.0036 | | 0.0037 | 0.15 | 16500 | 0.0035 | | 0.0036 | 0.15 | 17000 | 0.0035 | | 0.0036 | 0.16 | 17500 | 0.0035 | | 0.0035 | 0.16 | 18000 | 0.0033 | | 0.0037 | 0.16 | 18500 | 0.0033 | | 0.0035 | 0.17 | 19000 | 0.0032 | | 0.0032 | 0.17 | 19500 | 0.0031 | | 0.0032 | 0.18 | 20000 | 0.0031 | | 0.0033 | 0.18 | 20500 | 0.0030 | | 0.003 | 0.19 | 21000 | 0.0030 | | 0.0034 | 0.19 | 21500 | 0.0029 | | 0.0031 | 0.2 | 22000 | 0.0029 | | 0.003 | 0.2 | 22500 | 0.0028 | | 0.0032 | 0.2 | 23000 | 0.0028 | | 0.003 | 0.21 | 23500 | 0.0027 | | 0.0029 | 0.21 | 24000 | 0.0027 | | 0.0027 | 0.22 | 24500 | 0.0026 | | 0.0029 | 0.22 | 25000 | 0.0026 | | 0.0027 | 0.23 | 25500 | 0.0026 | | 0.0028 | 0.23 | 26000 | 0.0026 | | 0.0027 | 0.24 | 26500 | 0.0025 | | 0.0026 | 0.24 | 27000 | 0.0025 | | 0.0026 | 0.24 | 27500 | 0.0025 | | 0.0026 | 0.25 | 28000 | 0.0024 | | 0.0025 | 0.25 | 28500 | 0.0024 | | 0.0026 | 0.26 | 29000 | 0.0024 | | 0.0025 | 0.26 | 29500 | 0.0024 | | 0.0024 | 0.27 | 30000 | 0.0024 | | 0.0026 | 0.27 | 30500 | 0.0023 | | 0.0024 | 0.28 | 31000 | 0.0023 | | 0.0025 | 0.28 | 31500 | 0.0023 | | 0.0024 | 0.28 | 32000 | 0.0023 | | 0.0023 | 0.29 | 32500 | 0.0022 | | 0.0024 | 0.29 | 33000 | 0.0022 | | 0.0024 | 0.3 | 33500 | 0.0022 | | 0.0022 | 0.3 | 34000 | 0.0022 | | 0.0023 | 0.31 | 34500 | 0.0021 |
| 0.0023 | 0.31 | 35000 | 0.0021 | | 0.0024 | 0.32 | 35500 | 0.0021 | | 0.0023 | 0.32 | 36000 | 0.0021 | | 0.0023 | 0.32 | 36500 | 0.0021 | | 0.0021 | 0.33 | 37000 | 0.0021 | | 0.0021 | 0.33 | 37500 | 0.0021 | | 0.0022 | 0.34 | 38000 | 0.0021 | | 0.0022 | 0.34 | 38500 | 0.0020 | | 0.0022 | 0.35 | 39000 | 0.0020 | | 0.0022 | 0.35 | 39500 | 0.0020 | | 0.0022 | 0.36 | 40000 | 0.0022 | | 0.0022 | 0.36 | 40500 | 0.0020 | | 0.0022 | 0.36 | 41000 | 0.0020 | | 0.0021 | 0.37 | 41500 | 0.0020 | | 0.0022 | 0.37 | 42000 | 0.0020 | | 0.0021 | 0.38 | 42500 | 0.0020 | | 0.0021 | 0.38 | 43000 | 0.0019 | | 0.0022 | 0.39 | 43500 | 0.0019 | | 0.002 | 0.39 | 44000 | 0.0019 | | 0.0021 | 0.4 | 44500 | 0.0020 | | 0.0022 | 0.4 | 45000 | 0.0019 | | 0.0022 | 0.4 | 45500 | 0.0019 | | 0.002 | 0.41 | 46000 | 0.0019 | | 0.0018 | 0.41 | 46500 | 0.0019 | | 0.0022 | 0.42 | 47000 | 0.0019 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
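Because the head is multi-label ("ner-to-multilabel"), per-token scores should come from a sigmoid over the logits rather than a softmax. A minimal sketch, assuming the checkpoint loads as a standard token-classification model (placeholder repo id, 0.5 threshold chosen arbitrarily):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "<user>/bert-small-finetuned-ner-to-multilabel-finer-139"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

inputs = tokenizer("Net revenue increased to $12.3 million.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (batch, tokens, num_labels)
probs = torch.sigmoid(logits)              # independent probability per FiNER label
active = probs > 0.5                       # boolean mask of predicted labels per token
```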
AnonymousSub/declutr-model_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: silviacamplani/distilbert-finetuned-ner-ai results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # silviacamplani/distilbert-finetuned-ner-ai This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8962 - Validation Loss: 0.9088 - Train Precision: 0.3895 - Train Recall: 0.3901 - Train F1: 0.3898 - Train Accuracy: 0.7558 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 2.5761 | 1.7934 | 0.0 | 0.0 | 0.0 | 0.6480 | 0 | | 1.7098 | 1.5860 | 0.0 | 0.0 | 0.0 | 0.6480 | 1 | | 1.4692 | 1.3213 | 0.0 | 0.0 | 0.0 | 0.6480 | 2 | | 1.2755 | 1.1859 | 0.1154 | 0.0460 | 0.0658 | 0.6789 | 3 | | 1.1561 | 1.0921 | 0.2878 | 0.2010 | 0.2367 | 0.7192 | 4 | | 1.0652 | 1.0170 | 0.3250 | 0.2862 | 0.3043 | 0.7354 | 5 | | 0.9936 | 0.9649 | 0.3489 | 0.3305 | 0.3395 | 0.7462 | 6 | | 0.9442 | 0.9340 | 0.3845 | 0.3799 | 0.3822 | 0.7549 | 7 | | 0.9097 | 0.9168 | 0.3866 | 0.3748 | 0.3806 | 0.7556 | 8 | | 0.8962 | 0.9088 | 0.3895 | 0.3901 | 0.3898 | 0.7558 | 9 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
AnonymousSub/declutr-roberta-papers
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-tc-big-wiki-en-ko results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-tc-big-wiki-en-ko This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3008 - Bleu: 7.8420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 22 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
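A minimal inference sketch with the `transformers` translation pipeline (the full hub repo id is not stated in the card, so the model path below is a placeholder):

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual hub path of this checkpoint.
translator = pipeline("translation", model="<user>/opus-mt-tc-big-wiki-en-ko")
print(translator("Machine translation of Wikipedia articles is hard.")[0]["translation_text"])
```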
AnonymousSub/dummy_2_parent
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilrubert-tiny-cased-conversational-v1_empathy_preprocessed_punct_lowercasing results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilrubert-tiny-cased-conversational-v1_empathy_preprocessed_punct_lowercasing This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6036 - Accuracy: 0.7458 - F1: 0.7409 - Precision: 0.7420 - Recall: 0.7458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0953 | 1.0 | 9 | 1.0692 | 0.4661 | 0.3740 | 0.3740 | 0.4661 | | 1.066 | 2.0 | 18 | 1.0242 | 0.5593 | 0.5491 | 0.5446 | 0.5593 | | 1.0119 | 3.0 | 27 | 0.9259 | 0.6102 | 0.6106 | 0.6147 | 0.6102 | | 0.9118 | 4.0 | 36 | 0.8659 | 0.5847 | 0.5349 | 0.5835 | 0.5847 | | 0.8921 | 5.0 | 45 | 0.7925 | 0.6356 | 0.6133 | 0.6275 | 0.6356 | | 0.83 | 6.0 | 54 | 0.7776 | 0.6271 | 0.6087 | 0.6199 | 0.6271 | | 0.8015 | 7.0 | 63 | 0.7675 | 0.6695 | 0.6601 | 0.6871 | 0.6695 | | 0.7334 | 8.0 | 72 | 0.7133 | 0.6780 | 0.6659 | 0.6748 | 0.6780 | | 0.696 | 9.0 | 81 | 0.6939 | 0.6864 | 0.6758 | 0.6833 | 0.6864 | | 0.6349 | 10.0 | 90 | 0.6555 | 0.7119 | 0.7057 | 0.7085 | 0.7119 | | 0.6482 | 11.0 | 99 | 0.6585 | 0.7288 | 0.7202 | 0.7339 | 0.7288 | | 0.5924 | 12.0 | 108 | 0.6223 | 0.7373 | 0.7332 | 0.7343 | 0.7373 | | 0.5437 | 13.0 | 117 | 0.6364 | 0.7288 | 0.7231 | 0.7296 | 0.7288 | | 0.5653 | 14.0 | 126 | 0.6158 | 0.7373 | 0.7266 | 0.7342 | 0.7373 | | 0.5314 | 15.0 | 135 | 0.6104 | 0.7458 | 0.7439 | 0.7435 | 0.7458 | | 0.4912 | 16.0 | 144 | 0.6119 | 0.7458 | 0.7433 | 0.7442 | 0.7458 | | 0.4819 | 17.0 | 153 | 0.6040 | 0.7458 | 0.7452 | 0.7448 | 0.7458 | | 0.4873 | 18.0 | 162 | 0.6113 | 0.7288 | 0.7248 | 0.7275 | 0.7288 | | 0.4729 | 19.0 | 171 | 0.6035 | 0.7373 | 0.7292 | 0.7341 | 0.7373 | | 0.4654 | 20.0 | 180 | 0.6036 | 0.7458 | 0.7409 | 0.7420 | 0.7458 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/roberta-base_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - speech - text - cross-modal - unified model - self-supervised learning - SpeechT5 - Voice Conversion datasets: - CMUARCTIC - bdl - clb - rms - slt --- ## SpeechT5 VC Manifest | [**Github**](https://github.com/microsoft/SpeechT5) | [**Huggingface**](https://huggingface.co/mechanicalsea/speecht5-vc) | This manifest is an attempt to recreate the Voice Conversion recipe used for training [SpeechT5](https://aclanthology.org/2022.acl-long.393). It was constructed from four [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) speakers: bdl, clb, rms, and slt. There are 932 utterances for training, 100 utterances for validation, and 100 utterances for evaluation. ### News - 8 February 2023: SpeechT5 is integrated as an official model into the Hugging Face Transformers library [[Blog](https://huggingface.co/blog/speecht5)] and [[Demo](https://huggingface.co/spaces/Matthijs/speecht5-vc-demo)]. ### Requirements - [SpeechBrain](https://github.com/speechbrain/speechbrain) for extracting speaker embeddings - [Parallel WaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN) for the vocoder implementation. ### Tools - `manifest/utils` is used to extract speaker embeddings, generate the manifest, and apply the vocoder. - `manifest/arctic*` provides the pre-trained vocoder for each speaker. ### Model and Samples - [`speecht5_vc.pt`](./speecht5_vc.pt) is a reimplementation of the Voice Conversion fine-tuning on the released manifest, **but with a smaller batch size or fewer max updates** (make sure the manifest is set up correctly). - `samples` are generated by the released fine-tuned model and vocoder. ### Reference If you find our work useful in your research, please cite the following paper: ```bibtex @inproceedings{ao-etal-2022-speecht5, title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing}, author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu}, booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, month = {May}, year = {2022}, pages={5723--5738}, } ```
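With the Transformers integration mentioned above, voice conversion can be run roughly as follows. This is a sketch based on the official `microsoft/speecht5_vc` example, not on `speecht5_vc.pt` from this repo, and the zero speaker embedding is only a stand-in for an x-vector extracted with SpeechBrain as in `manifest/utils`:

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

waveform, sr = sf.read("source.wav")  # 16 kHz mono source utterance (hypothetical file)
inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")

# Stand-in target-speaker x-vector; in practice, extract a (1, 512) embedding
# with SpeechBrain as done in manifest/utils.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
sf.write("converted.wav", speech.numpy(), samplerate=16000)
```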
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2022-08-25T09:32:52Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-small-finetuned-ner-to-multilabel-finer-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-finetuned-ner-to-multilabel-finer-19 This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.208 | 0.03 | 500 | 0.1137 | | 0.1026 | 0.06 | 1000 | 0.1170 | | 0.0713 | 0.1 | 1500 | 0.1301 | | 0.0567 | 0.13 | 2000 | 0.1389 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - autotrain - text-classification language: - unk widget: - text: "AA sequence with spaces" datasets: - dav3794/autotrain-data-demo-knots2 co2_eq_emissions: emissions: 0.021557396511961088 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Dataset: 1:1 (unknotted : knotted) - Model ID: 1315550258 - CO2 Emissions (in grams): 0.0216 ## Validation Metrics - Loss: 0.391 - Accuracy: 0.833 - Precision: 0.836 - Recall: 0.823 - AUC: 0.900 - F1: 0.829 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots2-1315550258 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots2-1315550258", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots2-1315550258", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-small-finetuned-ner-to-multilabel-finer-50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-finetuned-ner-to-multilabel-finer-50 This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1739 | 0.02 | 500 | 0.0691 | | 0.1018 | 0.04 | 1000 | 0.0699 | | 0.0835 | 0.06 | 1500 | 0.0718 | | 0.0667 | 0.08 | 2000 | 0.0716 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # load_from_hub and evaluate_agent are helper functions defined in the Hugging Face Deep RL course notebook that accompanies this template model = load_from_hub(repo_id="Retrial9842/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-small-finetuned-ner-to-multilabel-wnut-17-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-finetuned-ner-to-multilabel-wnut-17-new This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2006 | 1.18 | 500 | 0.2043 | | 0.1247 | 2.35 | 1000 | 0.1960 | | 0.0935 | 3.53 | 1500 | 0.1893 | | 0.0742 | 4.71 | 2000 | 0.2003 | | 0.0552 | 5.88 | 2500 | 0.2106 | | 0.0405 | 7.06 | 3000 | 0.2039 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym # load_from_hub and evaluate_agent are helper functions defined in the Hugging Face Deep RL course notebook that accompanies this template model = load_from_hub(repo_id="Retrial9842/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-small-finetuned-ner-to-multilabel-xglue-ner-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-small-finetuned-ner-to-multilabel-xglue-ner-new This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1842 | 0.28 | 500 | 0.0937 | | 0.0858 | 0.57 | 1000 | 0.0666 | | 0.07 | 0.85 | 1500 | 0.0586 | | 0.0535 | 1.14 | 2000 | 0.0482 | | 0.0413 | 1.42 | 2500 | 0.0541 | | 0.0445 | 1.71 | 3000 | 0.0451 | | 0.0378 | 1.99 | 3500 | 0.0531 | | 0.0248 | 2.28 | 4000 | 0.0501 | | 0.0255 | 2.56 | 4500 | 0.0525 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - dav3794/autotrain-data-demo-knots3 co2_eq_emissions: emissions: 0.03305239439397985 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1315750263 - CO2 Emissions (in grams): 0.0331 ## Validation Metrics - Loss: 0.345 - Accuracy: 0.880 - Precision: 0.894 - Recall: 0.955 - AUC: 0.888 - F1: 0.923 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots3-1315750263 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots3-1315750263", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots3-1315750263", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: camembert-base-finetuned-Train_RAW15-dd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-base-finetuned-Train_RAW15-dd This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3634 - Precision: 0.8788 - Recall: 0.9009 - F1: 0.8897 - Accuracy: 0.9256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0899 | 1.0 | 12043 | 0.3289 | 0.8642 | 0.8996 | 0.8815 | 0.9156 | | 0.0756 | 2.0 | 24086 | 0.3634 | 0.8788 | 0.9009 | 0.8897 | 0.9256 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - dav3794/autotrain-data-demo-knots-all co2_eq_emissions: emissions: 0.1285808899475734 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1315850267 - CO2 Emissions (in grams): 0.1286 ## Validation Metrics - Loss: 0.085 - Accuracy: 0.982 - Precision: 0.984 - Recall: 0.997 - AUC: 0.761 - F1: 0.991 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots-all-1315850267 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots-all-1315850267", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots-all-1315850267", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - dav3794/autotrain-data-demo-knots-1-2 co2_eq_emissions: emissions: 0.019866640922183956 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1315950270 - CO2 Emissions (in grams): 0.0199 ## Validation Metrics - Loss: 0.396 - Accuracy: 0.792 - Precision: 0.915 - Recall: 0.652 - AUC: 0.900 - F1: 0.761 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots-1-2-1315950270 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots-1-2-1315950270", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots-1-2-1315950270", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2022-08-25T11:45:25Z
--- license: mit tags: - generated_from_trainer datasets: - turkish-multiclass-dataset metrics: - f1 model-index: - name: distilbert-base-turkish-cased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: turkish-multiclass-dataset type: turkish-multiclass-dataset config: TurkishMulticlassDataset split: train args: TurkishMulticlassDataset metrics: - name: F1 type: f1 value: 0.8276613385259164 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-turkish-cased-finetuned-emotion This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the turkish-multiclass-dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.4861 - F1: 0.8276613385259164 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------------------:| | 0.2578 | 1.0 | 313 | 0.5459 | 0.8212239281513611 | | 0.381 | 2.0 | 626 | 0.4861 | 0.8276613385259164 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2022-08-25T11:51:12Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-najianews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-najianews This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3788 - Accuracy: 0.9075 - F1: 0.9074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4709 | 1.0 | 249 | 0.3247 | 0.8933 | 0.8898 | | 0.2174 | 2.0 | 498 | 0.3848 | 0.9004 | 0.8952 | | 0.1444 | 3.0 | 747 | 0.3788 | 0.9075 | 0.9074 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-08-25T11:56:48Z
--- license: mit --- Three-class sentiment analysis (positive, negative, neutral). Based on https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes Fine-tuned using: - annotated sentences from book reviews in English https://www.gti.uvigo.es/index.php/en/book-reviews-annotated-dataset-for-aspect-based-sentiment-analysis - annotated paragraphs from amateur writers' stories https://arxiv.org/abs/1910.11769 Performance for books: ![hartmann_ft_test_books.png](https://s3.amazonaws.com/moonup/production/uploads/1661430297130-611a6e0f289467cafea62d0e.png) Performance for reviews: ![hartmann_ft_test_reviews.png](https://s3.amazonaws.com/moonup/production/uploads/1661430338990-611a6e0f289467cafea62d0e.png)
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: - fa tags: - Wikipedia - Summarizer - bert2bert - Summarization task_categories: - Summarization - text generation task_ids: - news-articles-summarization license: - apache-2.0 multilinguality: - monolingual datasets: - pn-summary - XL-Sum metrics: - rouge-1 - rouge-2 - rouge-l --- # WikiBert2WikiBert BERT language models can be employed for summarization tasks. WikiBert2WikiBert is an encoder-decoder transformer model initialized with the weights of the Persian WikiBert model, a BERT language model fine-tuned on Persian Wikipedia. After initialization, the model was trained for five epochs on the PN-Summary and Persian BBC datasets. ## How to Use: You can use the code below to get the model's outputs, or you can simply use the demo on the right. ``` from transformers import ( BertTokenizerFast, EncoderDecoderConfig, EncoderDecoderModel ) model_name = 'Arashasg/WikiBert2WikiBert' tokenizer = BertTokenizerFast.from_pretrained(model_name) config = EncoderDecoderConfig.from_pretrained(model_name) model = EncoderDecoderModel.from_pretrained(model_name, config=config).to("cuda") def generate_summary(text): inputs = tokenizer(text, padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to("cuda") attention_mask = inputs.attention_mask.to("cuda") outputs = model.generate(input_ids, attention_mask=attention_mask) output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True) return output_str text = 'your input comes here' summary = generate_summary(text) ``` ## Evaluation I held out 5 percent of the pn-summary dataset for evaluating the model. The rouge scores of the model are as follows: | Rouge-1 | Rouge-2 | Rouge-l | | ------------- | ------------- | ------------- | | 38.97% | 18.42% | 34.50% |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - endpoints-template library_name: generic --- # Sharded fp16 copy of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) > This is a fork of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) with sharded fp16 weights, implementing a custom `handler.py` as an example of how to use `gpt-j` with [inference-endpoints](https://hf.co/inference-endpoints)
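Outside of Inference Endpoints, a sharded fp16 checkpoint like this one can also be loaded directly with `transformers`. A minimal sketch, where the repository id is a placeholder for this repo's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo-id>"  # placeholder: replace with the id of this sharded fp16 fork
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# torch_dtype=torch.float16 keeps the sharded fp16 weights in half precision
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading in fp16 roughly halves the memory footprint relative to fp32, which is the point of shipping the weights this way.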
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # teven/all_bs192_hardneg This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('teven/all_bs192_hardneg') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/all_bs192_hardneg) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 650690 with parameters: ``` {'batch_size': 24, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 262920 with parameters: ``` {'batch_size': 24, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 250014 with parameters: ``` {'batch_size': 24, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 2000, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
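Since the model maps sentences into a shared 768-dimensional space, a common follow-up is scoring sentence pairs with cosine similarity. A small sketch using the `util` helpers bundled with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('teven/all_bs192_hardneg')
queries = ["This is an example sentence"]
corpus = ["Each sentence is converted", "A completely unrelated sentence"]

# Encode both sides, then score every query against every corpus entry
query_emb = model.encode(queries, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)  # shape: (len(queries), len(corpus))
print(scores)
```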
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: en license: mit tags: - vision - video-classification model-index: - name: nielsr/xclip-base-patch32 results: - task: type: video-classification dataset: name: Kinetics 400 type: kinetics-400 metrics: - type: top-1 accuracy value: 80.4 - type: top-5 accuracy value: 95.0 --- # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 32) trained fully-supervised on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). This model was trained using 8 frames per video, at a resolution of 224x224. Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs. ![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png) This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval. ## Intended uses & limitations You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#). ## Training data This model was trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). ### Preprocessing The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247). The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285). During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results This model achieves a top-1 accuracy of 80.4% and a top-5 accuracy of 95.0%.
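Since the card defers to the documentation for code, here is a minimal zero-shot sketch in the style of the transformers X-CLIP docs; the random frames stand in for a real 8-frame video, and the checkpoint name is taken from the model index above:

```python
import numpy as np
from transformers import XCLIPProcessor, XCLIPModel

model_name = "nielsr/xclip-base-patch32"
processor = XCLIPProcessor.from_pretrained(model_name)
model = XCLIPModel.from_pretrained(model_name)

# 8 frames of 224x224 RGB, matching the training setup described above
video = list(np.random.randint(0, 255, (8, 224, 224, 3), dtype=np.uint8))
inputs = processor(text=["playing sports", "eating spaghetti"], videos=video, return_tensors="pt", padding=True)

outputs = model(**inputs)
probs = outputs.logits_per_video.softmax(dim=1)  # how well each text matches the video
print(probs)
```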
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8675496688741722 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3069 - Accuracy: 0.8667 - F1: 0.8675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
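As a quick sanity check, the fine-tuned checkpoint can be served through a `pipeline`. A sketch, with the repository id left as a placeholder since the card does not state the owner:

```python
from transformers import pipeline

# Placeholder id: substitute the actual namespace of this checkpoint
classifier = pipeline("sentiment-analysis", model="<owner>/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a complete waste of time."))
print(classifier("An instant classic - I loved every minute of it."))
```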
ArcQ/gpt-experiments
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-26T00:43:39Z
--- tags: - t5 --- # Model Card for diabetes-t5-small # Model Details ## Model Description - **Developed by:** UCI NLP - **Shared by [Optional]:** More information needed - **Model type:** Text2Text Generation - **Language(s) (NLP):** More information needed - **License:** More information needed - **Related Models:** T5-small - **Parent Model:** T5 - **Resources for more information:** - [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer) - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5) # Uses ## Direct Use This model can be used for the task of Text2Text Generation. ## Downstream Use [Optional] More information needed ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5. The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**. See the [T5-small model card](https://huggingface.co/t5-small?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further training data details. ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data More information needed ### Factors ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation **BibTeX:** ```bibtex @article{2020t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {Journal of Machine Learning Research}, year = {2020}, volume = {21}, number = {140}, pages = {1-67}, url = {http://jmlr.org/papers/v21/20-074.html} } ``` **APA:** - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67. # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] UCI NLP in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ucinlp/diabetes-t5-small") model = AutoModelForSeq2SeqLM.from_pretrained("ucinlp/diabetes-t5-small") ``` </details>
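The get-started snippet only loads the weights; parsing itself runs through `generate`. A sketch of that call, where the example utterance and decoding settings are illustrative, since the card does not document the exact input format TalkToModel feeds the parser:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ucinlp/diabetes-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("ucinlp/diabetes-t5-small")

# Illustrative utterance; TalkToModel defines its own prompt format for parsing
text = "why did the model predict diabetes for this person?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```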
AriakimTaiyo/DialoGPT-cultured-Kumiko
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
null
AriakimTaiyo/kumiko
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - an4 license: cc-by-4.0 --- ## ESPnet2 ASR model ### `jkang/espnet2_an4_transformer` This model was trained by jaekookang using an4 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout c8f11ef7f5c571fbcc34d53da449353bd75037ce pip install -e . cd egs2/an4/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_an4_transformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Fri Aug 19 17:38:46 KST 2022` - python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]` - espnet version: `espnet 202207` - pytorch version: `pytorch 1.10.1` - Git hash: `c8f11ef7f5c571fbcc34d53da449353bd75037ce` - Commit date: `Fri Aug 19 17:20:13 2022 +0900` ## asr_train_asr_transformer_raw_en_bpe30_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|773|92.0|5.8|2.2|0.4|8.4|33.1| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|591|89.5|7.3|3.2|0.5|11.0|41.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2565|96.3|1.1|2.6|0.6|4.3|33.1| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|1915|94.1|1.9|4.0|0.4|6.3|41.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2695|96.4|1.1|2.5|0.6|4.1|33.1| |decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|2015|94.4|1.8|3.8|0.3|6.0|41.0| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_transformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_raw_en_bpe30_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 43015 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 200 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 64 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe30_sp/train/speech_shape - exp/asr_stats_raw_en_bpe30_sp/train/text_shape.bpe 
valid_shape_file: - exp/asr_stats_raw_en_bpe30_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe30_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_nodev_sp/wav.scp - speech - sound - - dump/raw/train_nodev_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/train_dev/wav.scp - speech - sound - - dump/raw/train_dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 scheduler: warmuplr scheduler_conf: warmup_steps: 2500 token_list: - <blank> - <unk> - ▁ - T - E - O - R - Y - A - H - U - S - I - F - B - L - P - D - G - M - C - V - X - J - K - Z - W - N - Q - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram30/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe30_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 required: - output_dir - token_list version: '202207' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Aries/T5_question_answering
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - nubes metrics: - precision - recall - f1 - accuracy model-index: - name: Negation_Scope_Detection_NubEs_Spanish_mBERT_fine_tuned results: - task: name: Token Classification type: token-classification dataset: name: nubes type: nubes config: Nubes split: train args: Nubes metrics: - name: Precision type: precision value: 0.9012345679012346 - name: Recall type: recall value: 0.9184315370098554 - name: F1 type: f1 value: 0.909751791463288 - name: Accuracy type: accuracy value: 0.9743603468579382 widget: - text: "Paciente con diagnóstico de ELA en abril de 2015 que presenta desde hace más de dos meses disfagia progresiva, para líquidos preferentemente, con dos neumonías por aspiración, por lo que se programa ingreso para colocación de sonda de gastrostomía, realizándose el día 31 de diciembre, sin complicaciones y tolerando posteriormente la dieta por gastrostomía." --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Negation_Scope_Detection_NubEs_Spanish_mBERT_fine_tuned This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the nubes dataset. It achieves the following results on the evaluation set: - Loss: 0.1624 - Precision: 0.9012 - Recall: 0.9184 - F1: 0.9098 - Accuracy: 0.9744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1802 | 1.0 | 1726 | 0.1849 | 0.7843 | 0.8526 | 0.8170 | 0.9509 | | 0.1216 | 2.0 | 3452 | 0.1512 | 0.8706 | 0.8352 | 0.8525 | 0.9579 | | 0.0817 | 3.0 | 5178 | 0.1083 | 0.8845 | 0.9038 | 0.8940 | 0.9710 | | 0.0517 | 4.0 | 6904 | 0.1314 | 0.8858 | 0.8960 | 0.8909 | 0.9693 | | 0.0265 | 5.0 | 8630 | 0.1514 | 0.8963 | 0.9079 | 0.9021 | 0.9721 | | 0.0136 | 6.0 | 10356 | 0.1524 | 0.9045 | 0.9092 | 0.9068 | 0.9729 | | 0.0045 | 7.0 | 12082 | 0.1624 | 0.9012 | 0.9184 | 0.9098 | 0.9744 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
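For inference, the checkpoint drops straight into a token-classification `pipeline`. A sketch reusing an excerpt of the widget sentence above; the repository id is a placeholder, as the card does not state the owner:

```python
from transformers import pipeline

# Placeholder id: substitute the actual namespace of this checkpoint
ner = pipeline("token-classification",
               model="<owner>/Negation_Scope_Detection_NubEs_Spanish_mBERT_fine_tuned",
               aggregation_strategy="simple")
text = ("Paciente con diagnóstico de ELA en abril de 2015 que presenta desde hace "
        "más de dos meses disfagia progresiva, sin complicaciones.")
print(ner(text))  # spans tagged as part of a negation scope
```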
ArnaudPannatier/MLPMixer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- # All Best Fine-Tuned Models The best fine-tuned T5 parsing models from TalkToModel [https://arxiv.org/abs/2207.04154](https://arxiv.org/abs/2207.04154)
Arnold/wav2vec2-hausa2-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: sv license: mit tags: - summarization datasets: - Gabriel/cnn_daily_swe widget: - text: 'Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under lördagens VM semifinal i Paris. Simon Shaw lastar av trots att Raphael Ibanez, vänster, och Sebastien Chabal. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra-rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Chabal började matchen på ersättningsbänken, men kom i 26: e minuten att ersätta den skadade Fabien Pelous under värd Frankrikes 14-9 nederlag. Om han blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes. Samtidigt, Frankrike tränare Bernard Laporte sade att nederlaget var svårare att ta än Englands 24-7 seger i 2003 semifinalen. "År 2003 var de bättre än oss. I själva verket var de bättre än alla", sade Laporte, som lämnar sin roll att tillträda posten som junior idrottsminister i den franska regeringen. "De var som Nya Zeeland i denna turnering - favoriten, förutom att de gick hela vägen. Den här gången är det svårare för igår var det 50-50." Samtidigt, England -- försöker bli den första nationen att försvara VM-titeln -- avslöjade att stjärna kicker Jonny Wilkinson återigen hade problem med matchbollarna under semifinalen. Flughalvan, som uttryckte sin oro efter att ha kämpat med stöveln mot Australien, avvisade en boll innan han sparkade en vital trepoängare mot Frankrike. "Vi sa det inte förra veckan men en icke-match bollen kom ut på fältet i Marseille som Jonny sparkade," chef för rugby Rob Andrew sade. "Han tänkte inte på det när han sparkade det. Matchbollarna är märkta, numrerade ett till sex. Igår kväll hade de "World Cup semifinal England vs Frankrike" skrivet på dem. På matchkvällen var Jonny vaksam när han sparkade för mål att de faktiskt var matchbollar han sparkade. "Träningsbollarna förlorar tryck och form. Hela frågan förra veckan, arrangörerna accepterade alla sex matchbollar bör användas av båda sidor på torsdagen före matchen. " E-post till en vän.' 
inference: parameters: temperature: 0.7 min_length: 30 max_length: 120 train-eval-index: - config: Gabriel--xsum_swe task: summarization task_id: summarization splits: eval_split: test col_mapping: document: text summary: target co2_eq_emissions: emissions: 0.0334 source: Google Colab training_type: fine-tuning geographical_location: Fredericia, Denmark hardware_used: Tesla P100-PCIE-16GB model-index: - name: bart-base-cnn-swe results: - task: type: summarization name: summarization dataset: name: Gabriel/cnn_daily_swe type: Gabriel/cnn_daily_swe split: validation metrics: - type: rouge-1 value: 22.2046 name: Validation ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2U4YjhhNzhjYmQ2ODY2YWU1NzY5Y2RkYzIzNTUxOTA0MmUwZjNmYTdjNmE5OWNkODhhYWExOGNkOGJkNTBkZiIsInZlcnNpb24iOjF9.lEMnUJOeoa5LepNmjsRflkQmtJAaVo03ocSs9JJuxbLeu4x0oY-XsML3O1IfDJQuvlO4WHZviykRLabkhTtbBQ - type: rouge-2 value: 10.4332 name: Validation ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODkzOGYxNTNiMjc1MGIyODAzY2Q2MjA0NTg1N2NjODc5YWZlZGI5Y2Q3ZDAwNTQyMjA4MmJjOGUxYzVlOGFlNyIsInZlcnNpb24iOjF9.lpn8VP1_AvHXvyYDJHEfMoIpb-_B5fXvMaMQ249Tngo3HBPrexPmhEvGqj1HVnVGxVFyDFhF2tYh4AhThguoBQ - type: rouge-l value: 18.1753 name: Validation ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWRkM2ViZjRmYmNkZTI1NWE1ODUzNGMwZTYzNzU1Yjk1ODM3NDNkZWMwMjc1ZGJmZGZkNWQxYThmN2ZiZDhjZCIsInZlcnNpb24iOjF9.6ENOPrpZ245V0jcZtiSOmWwdr06W9prPAyE9Qjnn5meiE7yoc0T0oquE9d8SLOfFqYcIbCb6vVUSVxFewj_VAA - type: rouge-l-sum value: 20.846 name: Validation ROUGE-L-SUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDJjODc0MTQwYTM5OGRmMjE2ODUxNGM4YmYxMTJlNDE1MzI0M2Q2ZGJkZDlkOWE2NTMxNjI0YjZjMDQwYjNjNyIsInZlcnNpb24iOjF9.FvNGRVWTCJSafucQKp3eW1B__SAHCL7qHBzooe8ufYIijnNuope0W7AIphex1WMT_9o_Unni2vPGFUvT2o9qAg --- # bart-base-cnn-swe This model is a W.I.P. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. This model is a fine-tuned version of [KBLab/bart-base-swedish-cased](https://huggingface.co/KBLab/bart-base-swedish-cased) on the [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe) dataset and can be used for summarization tasks. ## Intended uses & limitations This model should only be used for further fine-tuning and for summarization tasks. ```python from transformers import pipeline summarizer = pipeline("summarization", model="Gabriel/bart-base-cnn-swe") ARTICLE = """ Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under lördagens VM semifinal i Paris. Simon Shaw lastar av trots att Raphael Ibanez, vänster, och Sebastien Chabal. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra-rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Chabal började matchen på ersättningsbänken, men kom i 26: e minuten att ersätta den skadade Fabien Pelous under värd Frankrikes 14-9 nederlag. Om han blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes. Samtidigt, Frankrike tränare Bernard Laporte sade att nederlaget var svårare att ta än Englands 24-7 seger i 2003 semifinalen. "År 2003 var de bättre än oss. 
I själva verket var de bättre än alla", sade Laporte, som lämnar sin roll att tillträda posten som junior idrottsminister i den franska regeringen. "De var som Nya Zeeland i denna turnering - favoriten, förutom att de gick hela vägen. Den här gången är det svårare för igår var det 50-50." Samtidigt, England -- försöker bli den första nationen att försvara VM-titeln -- avslöjade att stjärna kicker Jonny Wilkinson återigen hade problem med matchbollarna under semifinalen. Flughalvan, som uttryckte sin oro efter att ha kämpat med stöveln mot Australien, avvisade en boll innan han sparkade en vital trepoängare mot Frankrike. "Vi sa det inte förra veckan men en icke-match bollen kom ut på fältet i Marseille som Jonny sparkade," chef för rugby Rob Andrew sade. "Han tänkte inte på det när han sparkade det. Matchbollarna är märkta, numrerade ett till sex. Igår kväll hade de "World Cup semifinal England vs Frankrike" skrivet på dem. På matchkvällen var Jonny vaksam när han sparkade för mål att de faktiskt var matchbollar han sparkade. "Träningsbollarna förlorar tryck och form. Hela frågan förra veckan, arrangörerna accepterade alla sex matchbollar bör användas av båda sidor på torsdagen före matchen. " E-post till en vän. """ print(summarizer(ARTICLE, max_length=130, min_length=30, num_beams=10 ,do_sample=False)) >>> [{'summary_text': """ Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under VM semifinal i Paris. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra - rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Om Chabal blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes."""}] ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2*2 = 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.2349 | 1.0 | 17944 | 2.0643 | 21.9564 | 10.2133 | 17.9958 | 20.6502 | 19.9992 | | 2.0726 | 2.0 | 35888 | 2.0253 | 22.0568 | 10.3302 | 18.0648 | 20.7482 | 19.9996 | | 1.8658 | 3.0 | 53832 | 2.0333 | 22.0871 | 10.2902 | 18.0577 | 20.7082 | 19.998 | | 1.8121 | 4.0 | 71776 | 1.9759 | 22.2046 | 10.4332 | 18.1753 | 20.846 | 19.9971 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5_assets results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_assets This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8718 - Rouge1: 35.7712 - Rouge2: 15.2129 - Rougel: 25.9007 - Rougelsum: 33.3105 - Gen Len: 64.7175 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
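Once pushed to the Hub, the checkpoint can be used through a summarization `pipeline`. A sketch, with the repository id as a placeholder since the card does not state the owner:

```python
from transformers import pipeline

# Placeholder id: substitute the actual namespace of this checkpoint
summarizer = pipeline("summarization", model="<owner>/t5_assets")
text = "Paste the document to condense here..."
print(summarizer(text, max_length=120, min_length=30, do_sample=False))
```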
Arpita/opus-mt-en-ro-finetuned-synthon-to-reactant
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- This repo contains the text-only finetuned vision-and-language model weights used for the "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?" paper.
ArthurcJP/DialoGPT-small-YODA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-26T06:59:38Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-a-loob results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.9060317460317461 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6550802139037433 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.655786350148368 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8043357420789328 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.95 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.631578947368421 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6412037037037037 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9245140876902215 - name: F1 (macro) type: f1_macro value: 0.9208294548760101 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8814553990610329 - name: F1 (macro) type: f1_macro value: 0.7355497663400952 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7128927410617552 - name: F1 (macro) type: f1_macro value: 0.7065924774146382 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9646657856298254 - name: F1 (macro) type: f1_macro value: 0.8945677578632619 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9081792541523034 - name: F1 (macro) type: f1_macro value: 0.906414518159255 --- # relbert/roberta-large-semeval2012-mask-prompt-a-loob RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6550802139037433 - Accuracy on SAT: 0.655786350148368 - Accuracy on BATS: 0.8043357420789328 - Accuracy on U2: 0.631578947368421 - Accuracy on U4: 0.6412037037037037 - Accuracy on Google: 0.95 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9245140876902215 - Micro F1 score on CogALexV: 0.8814553990610329 - Micro F1 score on EVALution: 0.7128927410617552 - Micro F1 score on K&H+N: 0.9646657856298254 - Micro F1 score on ROOT09: 0.9081792541523034 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.9060317460317461 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-loob") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: info_loob - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 21 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
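A relation embedding is mainly useful relative to other pairs. A small follow-up sketch comparing two word pairs with cosine similarity, assuming, as in the relbert README, that `get_embedding` also accepts a batch of pairs:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-loob")
# Embed two word pairs in one batch
vectors = model.get_embedding([['Tokyo', 'Japan'], ['Paris', 'France']])
a, b = np.asarray(vectors[0]), np.asarray(vectors[1])
# Analogous pairs (both capital-of relations) should score close to 1
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)
```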
Ashok/my-new-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: electra-small-discriminator-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-discriminator-finetuned-squad This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.3825 | 1.0 | 5533 | 1.2656 | | 1.1783 | 2.0 | 11066 | 1.1815 | | 1.0474 | 3.0 | 16599 | 1.1658 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
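For inference, the fine-tuned checkpoint plugs into a question-answering `pipeline`. A sketch, with the repository id as a placeholder since the card does not state the owner:

```python
from transformers import pipeline

# Placeholder id: substitute the actual namespace of this checkpoint
qa = pipeline("question-answering", model="<owner>/electra-small-discriminator-finetuned-squad")
result = qa(question="What dataset was the model fine-tuned on?",
            context="The ELECTRA-small discriminator was fine-tuned on the SQuAD dataset for three epochs.")
print(result["answer"], result["score"])
```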
Augustvember/WOKKAWOKKA
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased-tajik-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann config: tg split: train+test args: tg metrics: - name: Precision type: precision value: 0.5042016806722689 - name: Recall type: recall value: 0.5769230769230769 - name: F1 type: f1 value: 0.5381165919282511 - name: Accuracy type: accuracy value: 0.848129958443521 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-tajik-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 1.2137 - Precision: 0.5042 - Recall: 0.5769 - F1: 0.5381 - Accuracy: 0.8481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 2.0 | 50 | 0.9499 | 0.0450 | 0.0962 | 0.0613 | 0.6626 | | No log | 4.0 | 100 | 0.7348 | 0.1549 | 0.2115 | 0.1789 | 0.7401 | | No log | 6.0 | 150 | 0.6685 | 0.1916 | 0.3077 | 0.2362 | 0.8017 | | No log | 8.0 | 200 | 0.7875 | 0.3923 | 0.4904 | 0.4359 | 0.8036 | | No log | 10.0 | 250 | 0.7495 | 0.4225 | 0.5769 | 0.4878 | 0.8274 | | No log | 12.0 | 300 | 0.8934 | 0.4198 | 0.5288 | 0.4681 | 0.8085 | | No log | 14.0 | 350 | 0.9455 | 0.4758 | 0.5673 | 0.5175 | 0.8236 | | No log | 16.0 | 400 | 0.9469 | 0.5893 | 0.6346 | 0.6111 | 0.8410 | | No log | 18.0 | 450 | 0.9936 | 0.5333 | 0.6154 | 0.5714 | 0.8485 | | 0.2726 | 20.0 | 500 | 0.9804 | 0.5 | 0.6058 | 0.5478 | 0.8519 | | 0.2726 | 22.0 | 550 | 1.1035 | 0.5963 | 0.625 | 0.6103 | 0.8432 | | 0.2726 | 24.0 | 600 | 1.0318 | 0.5856 | 0.625 | 0.6047 | 0.8576 | | 0.2726 | 26.0 | 650 | 1.1820 | 0.4921 | 0.5962 | 0.5391 | 0.8221 | | 0.2726 | 28.0 | 700 | 1.1204 | 0.4878 | 0.5769 | 0.5286 | 0.8311 | | 0.2726 | 30.0 | 750 | 1.1911 | 0.5357 | 0.5769 | 0.5556 | 0.8376 | | 0.2726 | 32.0 | 800 | 1.1747 | 0.5259 | 0.5865 | 0.5545 | 0.8394 | | 0.2726 | 34.0 | 850 | 1.1403 | 0.5872 | 0.6154 | 0.6009 | 0.8542 | | 0.2726 | 36.0 | 900 | 1.1824 | 0.5370 | 0.5577 | 0.5472 | 0.8330 | | 0.2726 | 38.0 | 950 | 1.1467 | 0.5424 | 0.6154 | 0.5766 | 0.8440 | | 0.003 | 40.0 | 1000 | 1.2148 | 0.5268 | 0.5673 | 0.5463 | 0.8360 | | 0.003 | 42.0 | 1050 | 1.3478 | 0.5273 | 0.5577 | 0.5421 | 0.8266 | | 0.003 | 44.0 | 1100 | 1.2137 | 0.5042 | 0.5769 | 0.5381 | 0.8481 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Ayato/DialoGTP-large-Yuri
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 library_name: keras tags: - doe2vec - exploratory-landscape-analysis - autoencoders datasets: - BasStein/250000-randomfunctions-2d metrics: - mse co2_eq_emissions: emissions: 0.0363 source: "code carbon" training_type: "pre-training" geographical_location: "Leiden, The Netherlands" hardware_used: "1 Tesla T4" --- ## Model description DoE2Vec model that can transform any design of experiments (function landscape) to a feature vector. For different input dimensions or sample sizes you need a different model. Each model name is built up like doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight} Example code for loading this Hugging Face model using the doe2vec package. First install the package ```zsh pip install doe2vec ``` Then import and load the model. ```python from doe2vec import doe_model obj = doe_model( 2, 8, latent_dim=24, kl_weight=0.001, model_type="VAE" ) obj.load_from_huggingface() # test the model obj.plot_label_clusters_bbob() ``` ## Intended uses & limitations The model is intended to be used to generate feature representations for optimization function landscapes. The representations can then be used for downstream tasks such as automatic optimization pipelines and meta-learning. ## Training procedure The model is trained using a weighted KL loss and a mean squared error reconstruction loss. The model is trained using 250,000 randomly generated functions (see the dataset) over 100 epochs. - **Hardware:** 1x Tesla T4 GPU - **Optimizer:** Adam
Ayham/albert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.77
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# training notebook that produced this model (not part of a published package).
model = load_from_hub(repo_id="mdround/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
BSC-LT/roberta-large-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---

A MahaBERT (l3cube-pune/marathi-bert-v2) model fine-tuned on 1 million random Marathi tweets.

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267).

Released under project: https://github.com/l3cube-pune/MarathiNLP

```
@article{gokhale2022spread,
  title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection},
  author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2210.04267},
  year={2022}
}
```
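A minimal usage sketch (assuming the checkpoint is exposed as a standard masked-LM model; the repository id below is a placeholder, not confirmed by the card):

```python
from transformers import pipeline

# Placeholder repository id -- replace with the actual Hub path of this model.
fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-tweet-bert")

# Marathi example: "I live in [MASK]." (BERT-style models use the [MASK] token.)
for prediction in fill_mask("मी [MASK] मध्ये राहतो."):
    print(prediction["token_str"], round(prediction["score"], 4))
```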
BW/TEST
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
---
language:
- ja
tags:
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
datasets:
- wikipedia
- oscar
- cc100
---

# Japanese T5 Prefix Language Model

This is a T5 (Text-to-Text Transfer Transformer) adapted language model fine-tuned on a Japanese corpus.

Starting from the pretrained Japanese T5 model ([sonoisa/t5-base-japanese-v1.1](https://huggingface.co/sonoisa/t5-base-japanese-v1.1)), it was trained for an additional 100K steps on the [adapted language modeling task](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) (predicting the continuation of a given token sequence).

The following Japanese corpora (about 100GB in total) were used for the additional training:

* The Japanese dump of [Wikipedia](https://ja.wikipedia.org) (as of June 27, 2022)
* The Japanese part of [OSCAR](https://oscar-corpus.com)
* The Japanese part of [CC-100](http://data.statmt.org/cc-100/)

# Sample code

```python
import textwrap
import unicodedata

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-prefixlm-base-japanese", is_fast=False)
trained_model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-prefixlm-base-japanese")


def normalize_text(text):
    # Minimal preprocessing: Unicode NFKC normalization of the input text.
    return unicodedata.normalize("NFKC", text)


# Use the GPU if one is available.
USE_GPU = torch.cuda.is_available()
if USE_GPU:
    trained_model.cuda()

# Switch to inference mode.
trained_model.eval()

# Preprocess and tokenize the prompt (roughly: "Deep learning is ...").
inputs = [normalize_text("深層学習(ディープラーニング)とは、")]
batch = tokenizer.batch_encode_plus(
    inputs, max_length=1024, truncation=True,
    padding="longest", return_tensors="pt")

input_ids = batch['input_ids']
input_mask = batch['attention_mask']
if USE_GPU:
    input_ids = input_ids.cuda()
    input_mask = input_mask.cuda()

# Run generation.
outputs = trained_model.generate(
    input_ids=input_ids,
    attention_mask=input_mask,
    max_length=256,
    temperature=1.0,          # temperature parameter adding randomness to generation
    num_beams=10,             # beam search width
    diversity_penalty=1.0,    # penalty encouraging diversity across beam groups
    num_beam_groups=10,       # number of beam search groups
    num_return_sequences=10,  # number of sequences to return
    repetition_penalty=2.0,   # penalty against repeating the same text (mode collapse)
)

# Convert the generated token sequences back into strings.
generated_bodies = [
    tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
    for ids in outputs
]

# Print the generated texts.
for i, body in enumerate(generated_bodies):
    print("\n".join(textwrap.wrap(f"{i+1:2}. {body}")))
```

Output (the generated Japanese texts are reproduced verbatim):

```
 1. 様々なデータから、そのデータを抽出し、解析する技術です。深層学習とは、ディープラーニングの手法の一つです。ディープラーニングは、コンピュータが行う処理を機械学習で実現する技術です。ディープラーニングは、コンピュータが行う処理を機械学習で実現する技術です。ディープラーニングは、コンピュータが行う処理を機械学習で実現する技術です。ディープラーニングは、コンピュータが行う処理を機械学習で実現する技術です。
 2. 様々なデータから、そのデータを抽出し、解析する技術です。深層学習とは、ディープラーニングの手法の一つです。ディープラーニングは、コンピュータが行う処理を機械学習で実現する技術です。
 3. ディープラーニングの手法のひとつで、人間の脳をモデル化し、そのデータを解析する手法である。深層学習は、コンピュータが処理するデータの量や質を予測し、それを機械学習に応用するものである。この手法は、人工知能(ai)の分野において広く用いられている。
 4. 深層学習(deep learning)の手法の一つ。ディープラーニングとは、人間の脳に蓄積されたデータを解析し、そのデータから得られた情報を分析して、それを機械学習や人工知能などの機械学習に応用する手法である。ディープラーニングは、コンピュータが処理するデータの量と質を測定し、その結果を機械学習で表現する手法である。ディープラーニングは、人間が行う作業を機械学習で表現する手法である。ディープラーニングは、人間にとって最も身近な技術であり、多くの人が利用している。ディープラーニングは、コンピューターの処理能力を向上させるために開発された。
 5. 人間の脳の深層学習を応用した人工知能(ai)である。ディープラーニングは、コンピュータが処理するデータを解析し、そのデータから得られた結果を分析して、それを機械学習やディープラーニングに変換して学習するものである。ディープラーニングは、人間が行う作業を機械学習で再現することを可能にする。
 6. 人間の脳の深層学習を応用した人工知能(ai)である。ディープラーニングは、コンピュータが処理するデータを解析し、そのデータから得られた結果を分析して、それを機械学習やディープラーニングに変換して学習するものである。ディープラーニングは、人間が行う作業を機械学習で再現することを可能にする。ディープラーニングは、人間と機械との対話を通じて、人間の脳の仕組みを理解することができる。
 7. 深層学習によって、人間の脳の神経細胞や神経細胞を活性化し、その働きを解明する手法です。ディープラーニングは、コンピュータが処理するデータの量と質を測定することで、人間が行う作業の効率化を図る手法です。また、機械学習では、人間に与えられたデータを解析して、それを機械学習で処理する技術です。このため、機械学習では、機械学習のアルゴリズムを組み合わせることで、より精度の高い処理を実現します。さらに、機械学習では、機械学習のアルゴリズムを組み合わせることで、より精度の高い処理を実現します。
 8. 学習したデータを解析し、そのデータの分析を行う手法。ディープラーニングは、コンピュータが人間の脳に与える影響を予測する技術である。深層学習は、人間が脳に与える影響を予測する技術である。ディープラーニングは、人間と機械との相互作用によって行われる。例えば、ロボットが物体を分解して、それを検出したり、物体を分解して、その物体を別の物体として認識したりする。また、物体を分解して、その物体を別の物体として認識する。このプロセスは、コンピューターが行う処理よりもはるかに高速である。ディープラーニングは、コンピュータが行う処理よりも速く、より効率的な処理が可能である。
 9. 深層学習によって、人間の脳の神経細胞や神経細胞を活性化し、その働きを解明する手法です。ディープラーニングは、コンピュータが処理するデータの量と質を測定することで、人間が行う作業の効率化を図る手法です。
10. 学習したデータを解析し、そのデータの分析を行う手法。ディープラーニングは、コンピュータが人間の脳に与える影響を予測する技術である。深層学習は、人間が脳に与える影響を予測する技術である。ディープラーニングは、人間と機械との相互作用によって行われる。例えば、ロボットが物体を分解して、それを検出したり、物体を分解して、その物体を別の物体として認識したりする。また、物体を分解して、その物体を別の物体として認識する。このプロセスは、コンピューターが行う処理よりもはるかに高速である。
```

# Disclaimer

The author of this model has taken great care over its content and functionality, but makes no guarantee that the model's outputs are accurate or safe, and assumes no responsibility for them. Even if a user suffers any inconvenience or damage through the use of this model, the authors of the model and datasets and their affiliated organizations bear no responsibility. Users are obliged to make clear that the authors of the model and datasets and their affiliated organizations bear no such responsibility.

# License

[CC-BY SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja)

Please also take care to comply with the [Common Crawl Terms of Use](http://commoncrawl.org/terms-of-use/).

# Acknowledgements

This model was created based on the article "[はじめての自然言語処理 第21回 T5X と Prompt Tuning の検証](https://www.ogis-ri.co.jp/otc/hiroba/technical/similar-document-search/part21.html)" by Kazuya Uno (鵜野和也) of OGIS-RI (オージス総研). I am very grateful that such an excellent article was made publicly available.
Backedman/DialoGPT-small-Anika
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
---
language: hu
license: apache-2.0
datasets:
- common_crawl
- wikipedia
tags:
- byte representation
- gradient boosting
- hungarian
---

# Charmen-Electra

A byte-based transformer model trained on the Hungarian language. In order to use the model you will need a custom tokenizer, which is available at: [https://github.com/szegedai/byte-offset-tokenizer](https://github.com/szegedai/byte-offset-tokenizer).

Since we use a custom architecture with gradient boosting as well as down- and up-sampling, you have to enable trusted remote code when loading the model:

```python
model = AutoModel.from_pretrained("SzegedAI/charmen-electra", trust_remote_code=True)
```

# Acknowledgement

[![Artificial Intelligence - National Laboratory - Hungary](https://milab.tk.hu/uploads/images/milab_logo_en.png)](https://mi.nemzetilabor.hu/)
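A rough loading sketch. Only the `AutoModel` call is taken from the card; the tokenizer class name and its call below are assumptions, since the real byte-offset tokenizer has to be installed from the linked repository:

```python
from transformers import AutoModel

# trust_remote_code pulls in the custom gradient-boosted, down-/up-sampling architecture.
model = AutoModel.from_pretrained("SzegedAI/charmen-electra", trust_remote_code=True)

# The byte-offset tokenizer is not on the Hub; install it from
# https://github.com/szegedai/byte-offset-tokenizer. The class name and call
# below are assumptions for illustration only -- check that repository for the real API.
# tokenizer = ByteOffsetTokenizer()
# inputs = tokenizer("Szeged a napfény városa.", return_tensors="pt")
# outputs = model(**inputs)
```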
Bagus/ser-japanese
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-finetuned-dapt_tapt-lm-ai
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-finetuned-dapt_tapt-lm-ai

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -934, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16

### Training results

### Framework versions

- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
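A possible usage sketch (assuming the checkpoint is a standard DistilBERT masked-LM saved from Keras; the repository id is a placeholder):

```python
from transformers import pipeline

# Placeholder repository id -- replace with the actual Hub path of this model.
fill_mask = pipeline(
    "fill-mask",
    model="your-username/distilbert-finetuned-dapt_tapt-lm-ai",
    framework="tf",  # the checkpoint was trained and saved with Keras/TensorFlow
)

for prediction in fill_mask("Domain-adapted language models perform better on [MASK] text."):
    print(prediction["token_str"], round(prediction["score"], 4))
```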
BaptisteDoyen/camembert-base-xnli
[ "pytorch", "tf", "camembert", "text-classification", "fr", "dataset:xnli", "transformers", "zero-shot-classification", "xnli", "nli", "license:mit", "has_space" ]
zero-shot-classification
{ "architectures": [ "CamembertForSequenceClassification" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
405,474
2022-08-27T12:28:14Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-gauss-3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# recipe-gauss-3

This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2729
- RMSE: 0.5224
- MSE: 0.2729
- MAE: 0.3977

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE   | MSE    | MAE    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2602        | 1.0   | 745  | 0.2613          | 0.5112 | 0.2613 | 0.4000 |
| 0.2562        | 2.0   | 1490 | 0.2595          | 0.5094 | 0.2595 | 0.3958 |
| 0.2542        | 3.0   | 2235 | 0.2595          | 0.5094 | 0.2595 | 0.3932 |
| 0.2521        | 4.0   | 2980 | 0.2615          | 0.5114 | 0.2615 | 0.4016 |
| 0.2503        | 5.0   | 3725 | 0.2594          | 0.5093 | 0.2594 | 0.3913 |
| 0.2473        | 6.0   | 4470 | 0.2641          | 0.5139 | 0.2641 | 0.3834 |
| 0.244         | 7.0   | 5215 | 0.2649          | 0.5147 | 0.2649 | 0.3986 |
| 0.2401        | 8.0   | 5960 | 0.2661          | 0.5158 | 0.2661 | 0.3906 |
| 0.2359        | 9.0   | 6705 | 0.2693          | 0.5189 | 0.2693 | 0.3946 |
| 0.2332        | 10.0  | 7450 | 0.2729          | 0.5224 | 0.2729 | 0.3977 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
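A possible inference sketch. The RMSE/MSE/MAE metrics suggest a single-output regression head, so the code below treats the logit as the predicted score; both that reading and the repository id are assumptions, not statements from the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repository id -- replace with the actual Hub path of this model.
repo = "paola-md/recipe-gauss-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Assumption: a single-output regression head, consistent with the RMSE/MSE/MAE metrics.
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

inputs = tokenizer("Whisk eggs, sugar and flour; bake for 20 minutes.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.3f}")
```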
Barkavi/totto-t5-base-bert-score-121K
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
51
2022-08-27T12:42:28Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-e-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8682936507936508
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6176470588235294
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6231454005934718
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7570872707059477
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.874
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6008771929824561
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6226851851851852
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9311435889709206
    - name: F1 (macro)
      type: f1_macro
      value: 0.9268380061574883
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8744131455399061
    - name: F1 (macro)
      type: f1_macro
      value: 0.7267491613759859
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7053087757313109
    - name: F1 (macro)
      type: f1_macro
      value: 0.694918491135901
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9652222299506156
    - name: F1 (macro)
      type: f1_macro
      value: 0.8967485289493923
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8965841429019116
    - name: F1 (macro)
      type: f1_macro
      value: 0.8952392246946669
---

# relbert/roberta-large-semeval2012-mask-prompt-e-loob

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6176470588235294
    - Accuracy on SAT: 0.6231454005934718
    - Accuracy on BATS: 0.7570872707059477
    - Accuracy on U2: 0.6008771929824561
    - Accuracy on U4: 0.6226851851851852
    - Accuracy on Google: 0.874
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9311435889709206
    - Micro F1 score on CogALexV: 0.8744131455399061
    - Micro F1 score on EVALution: 0.7053087757313109
    - Micro F1 score on K&H+N: 0.9652222299506156
    - Micro F1 score on ROOT09: 0.8965841429019116
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8682936507936508

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-e-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
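Building on the `get_embedding` call shown above, one natural follow-up is to compare two word pairs by the cosine similarity of their relation vectors. This is a sketch, assuming `get_embedding` returns a plain 1-D vector as the shape comment indicates:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-e-loob")

# Relation embeddings for two analogous capital--country pairs.
v1 = np.asarray(model.get_embedding(['Tokyo', 'Japan']))
v2 = np.asarray(model.get_embedding(['Paris', 'France']))

# Cosine similarity: analogous pairs should score close to 1.
cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(f"cosine similarity: {cos:.3f}")
```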
Barytes/hellohf
[ "tf", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-08-27T13:05:50Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1629
- F1: 0.8572

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2937        | 1.0   | 715  | 0.1778          | 0.8292 |
| 0.1471        | 2.0   | 1430 | 0.1583          | 0.8507 |
| 0.0938        | 3.0   | 2145 | 0.1629          | 0.8572 |

### Framework versions

- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
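A possible usage sketch (the repository id is a placeholder; PAN-X fine-tunes are token classifiers, so the standard NER pipeline should apply):

```python
from transformers import pipeline

# Placeholder repository id -- replace with the actual Hub path of this model.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# German and French examples -- the model was fine-tuned on both languages.
print(ner("Angela Merkel wurde in Hamburg geboren."))
print(ner("Emmanuel Macron est né à Amiens."))
```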