modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-27 18:27:08) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-27 18:22:57) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
takaha202306/facelora
|
takaha202306
| 2023-06-17T17:54:59Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T16:17:55Z |
---
license: creativeml-openrail-m
---
# about
This repository contains two Stable Diffusion LoRAs for generating "cute MOE face" illustrations, along with their training data.
They were trained with [kohya-ss's sd_scripts](https://github.com/kohya-ss/sd-scripts) and tested on [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
The base model is essentially AOM3; the model I actually used had a small merge applied:
[AOM3.safetensors](https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix3/AOM3_orangemixs.safetensors) + [Hipoly 3D Model LoRA v2.0](https://civitai.com/models/8730/hipoly-3d-model-lora):0.15:0,1,1,1,1,1,1,0.1,0.1,0.1,0.1,1,1,1,0.3,0.1,0.1.
The Hipoly 3D Model LoRA was merged with [SuperMerger](https://github.com/hako-mikan/sd-webui-supermerger).
These LoRAs don't require a trigger word; they simply change the illustration style when applied.
I recommend setting the weight between 0.05 and 0.4 and using the [LoRA Block Weight extension](https://github.com/hako-mikan/sd-webui-lora-block-weight).
# facelora1
The first LoRA was not actually made for this purpose: I attempted to make an "Uchinoko" LoRA ("Uchinoko" means "my daughter", i.e. a favorite original character, lol).
The training data are my favorite images (generated by me with the model above), plus augmentation images made with [controlnet's reference-only](https://github.com/Mikubill/sd-webui-controlnet#reference-only-control).
This LoRA worked well, better than I intended, and can be used to generate beautiful girls in a generic way.
# facelora2
I generated various images of beautiful girls with the LoRA above and trained a second LoRA on them.
This LoRA is more suitable for general-purpose generation.
# samples
|
Atnafu/amharic_xlmr_base
|
Atnafu
| 2023-06-17T17:36:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-17T02:34:11Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: amh_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amh_base
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3301
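A minimal usage sketch (not part of the original card): it assumes the standard `transformers` fill-mask pipeline, and the Amharic example sentence is illustrative (roughly "Addis Ababa is the [MASK] of Ethiopia").
```python
from transformers import pipeline

# repo id taken from this card; the task follows from the fill-mask pipeline tag
fill = pipeline("fill-mask", model="Atnafu/amharic_xlmr_base")

# illustrative example sentence (not from the card)
text = f"አዲስ አበባ የኢትዮጵያ {fill.tokenizer.mask_token} ናት።"
print(fill(text))
```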
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mmirmahdi/q-FrozenLake-v1-4x4-noSlippery
|
mmirmahdi
| 2023-06-17T17:22:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T17:22:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mmirmahdi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
0xghagevaibhav/meMyself
|
0xghagevaibhav
| 2023-06-17T17:11:20Z | 0 | 0 | null |
[
"hi",
"license:unknown",
"region:us"
] | null | 2023-06-17T17:09:19Z |
---
license: unknown
language:
- hi
---
|
erens/mikasalast
|
erens
| 2023-06-17T17:01:46Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T16:46:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mikasaLAST Dreambooth model trained by erens with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
DionTimmer/controlnet_qrcode
|
DionTimmer
| 2023-06-17T16:33:13Z | 2,526 | 306 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"en",
"license:openrail++",
"region:us"
] | null | 2023-06-15T02:23:37Z |
---
tags:
- stable-diffusion
- controlnet
license: openrail++
language:
- en
---
# QR Code Conditioned ControlNet Models for Stable Diffusion 1.5 and 2.1

## Model Description
These ControlNet models have been trained on a large dataset of 150,000 QR code + QR code artwork pairs. They provide a solid foundation for generating QR code-based artwork that is aesthetically pleasing while still maintaining the integral QR code shape.
The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, a 1.5 version was also trained on the same dataset for those who are using the older version.
Separate repos for usage in diffusers can be found here:<br>
1.5: https://huggingface.co/DionTimmer/controlnet_qrcode-control_v1p_sd15<br>
2.1: https://huggingface.co/DionTimmer/controlnet_qrcode-control_v11p_sd21<br>
## How to use with Diffusers
```bash
pip -q install diffusers transformers accelerate torch xformers
```
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image
controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v1p_sd15",
torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
)
pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
def resize_for_condition_image(input_image: Image, resolution: int):
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img
# play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image
# qr code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")
condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)
image = pipe(
    prompt="a bilboard in NYC with a qrcode",
    negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
    image=init_image,
    control_image=condition_image,
    width=768,
    height=768,
    guidance_scale=20,
    controlnet_conditioning_scale=1.5,
    generator=generator,
    strength=0.9,
    num_inference_steps=150,
)
image.images[0]
```
## Performance and Limitations
These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).**
To balance style and shape, gentle fine-tuning of the control weight may be required for each individual input and desired output, along with the right prompt. Some prompts do not work until you increase the weight considerably. Finding the right balance between these factors is part art and part science. For the best results, generate your artwork at a resolution of 768; this allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
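Following the scanning recommendation above, here is a small sketch (not from the original card) that generates a condition QR code with error correction level 'H' using the Python `qrcode` package; the package choice and the example payload are assumptions.
```python
import qrcode  # pip install "qrcode[pil]"

qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # 'H' = ~30% error correction, as recommended above
    box_size=16,
    border=4,
)
qr.add_data("https://example.com")  # illustrative payload
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("qr_condition.png")
# use qr_condition.png as the control_image / input image for the pipeline above
```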
## Installation
The simplest way to use these models is to place the .safetensors model and its .yaml config file in the folder where your other ControlNet models are installed, which varies per application.
For usage in AUTOMATIC1111, place them in the webui/models/ControlNet folder. They can be loaded with the ControlNet webui extension, which you can install through the Extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your ControlNet unit and set your input image to the QR code. Set the model to either the SD 2.1 or 1.5 version, matching your base Stable Diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail.
If you get stuck, look up additional info on how to use ControlNet; once you have the webui up and running, installing the ControlNet extension is straightforward as well.
  
|
dnjdsxor21/roberta-klue-ssm
|
dnjdsxor21
| 2023-06-17T16:32:52Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ko",
"dataset:dnjdsxor21/preprocessed-wiki-kor",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-11T13:40:11Z |
---
widget:
- text: 이순신은 [MASK] 중기의 무신이다.
- text: 오바마는 미국의 [MASK] 이다.
language:
- ko
pipeline_tag: fill-mask
datasets:
- dnjdsxor21/preprocessed-wiki-kor
mask_token: '[MASK]'
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
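A minimal direct-use sketch (not part of the original card, which is still a template): it assumes the standard `transformers` fill-mask pipeline, and the input sentence mirrors one of the widget examples above.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="dnjdsxor21/roberta-klue-ssm")
print(fill("오바마는 미국의 [MASK] 이다."))  # "Obama is the [MASK] of the United States."
```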
|
mustika/alan3
|
mustika
| 2023-06-17T16:21:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T15:09:00Z |
---
license: creativeml-openrail-m
---
|
vlkn/flan-t5-small-taboo-for-llms
|
vlkn
| 2023-06-17T16:20:59Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-03T13:32:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-small-taboo-for-llms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-taboo-for-llms
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4825
- Rouge1: 27.3897
- Rouge2: 9.9232
- Rougel: 24.2026
- Rougelsum: 24.6485
- Gen Len: 18.5172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 137 | 2.5897 | 26.6789 | 9.9538 | 23.6637 | 24.2407 | 18.3621 |
| No log | 2.0 | 274 | 2.5560 | 25.4162 | 9.6277 | 22.7084 | 23.0883 | 18.3966 |
| No log | 3.0 | 411 | 2.5377 | 26.0239 | 9.7748 | 23.4425 | 23.7935 | 18.6034 |
| 2.8204 | 4.0 | 548 | 2.5241 | 26.6294 | 9.9168 | 23.8023 | 24.2756 | 18.7241 |
| 2.8204 | 5.0 | 685 | 2.5120 | 25.8274 | 9.9333 | 23.8865 | 24.0724 | 18.7586 |
| 2.8204 | 6.0 | 822 | 2.5031 | 26.7774 | 9.9651 | 24.3654 | 24.6102 | 18.6034 |
| 2.8204 | 7.0 | 959 | 2.4985 | 26.5058 | 10.0422 | 24.0403 | 24.635 | 18.4655 |
| 2.6101 | 8.0 | 1096 | 2.4934 | 26.6953 | 9.9536 | 24.0293 | 24.6809 | 18.4655 |
| 2.6101 | 9.0 | 1233 | 2.4907 | 26.7978 | 9.6249 | 23.714 | 23.9992 | 18.6034 |
| 2.6101 | 10.0 | 1370 | 2.4847 | 27.2135 | 9.878 | 23.8398 | 24.2389 | 18.5 |
| 2.4726 | 11.0 | 1507 | 2.4856 | 27.1799 | 9.9337 | 23.9393 | 24.4067 | 18.5172 |
| 2.4726 | 12.0 | 1644 | 2.4835 | 27.4491 | 10.1828 | 24.0926 | 24.4819 | 18.5 |
| 2.4726 | 13.0 | 1781 | 2.4825 | 27.3897 | 9.9232 | 24.2026 | 24.6485 | 18.5172 |
| 2.4726 | 14.0 | 1918 | 2.4836 | 27.5567 | 10.7405 | 24.2497 | 24.6566 | 18.5345 |
| 2.3731 | 15.0 | 2055 | 2.4872 | 27.7517 | 11.0182 | 24.1007 | 24.7218 | 18.4828 |
| 2.3731 | 16.0 | 2192 | 2.4852 | 27.3461 | 11.3381 | 24.084 | 24.5125 | 18.4655 |
| 2.3731 | 17.0 | 2329 | 2.4872 | 27.3558 | 11.1005 | 24.047 | 24.4973 | 18.4655 |
| 2.3731 | 18.0 | 2466 | 2.4841 | 26.9427 | 10.9288 | 23.7324 | 24.4298 | 18.5345 |
| 2.2967 | 19.0 | 2603 | 2.4881 | 27.5 | 10.8437 | 24.1593 | 24.6028 | 18.4483 |
| 2.2967 | 20.0 | 2740 | 2.4908 | 27.517 | 11.0039 | 24.1049 | 24.7111 | 18.5 |
| 2.2967 | 21.0 | 2877 | 2.4917 | 27.7333 | 10.935 | 24.4076 | 24.9887 | 18.4138 |
| 2.2553 | 22.0 | 3014 | 2.4926 | 27.6275 | 10.7562 | 24.2295 | 24.7476 | 18.4138 |
| 2.2553 | 23.0 | 3151 | 2.4945 | 27.9085 | 10.943 | 24.6135 | 25.2373 | 18.4138 |
| 2.2553 | 24.0 | 3288 | 2.4948 | 27.5261 | 10.7141 | 24.2429 | 24.816 | 18.4138 |
| 2.2553 | 25.0 | 3425 | 2.4931 | 27.5522 | 10.8702 | 24.5576 | 25.0714 | 18.4655 |
| 2.213 | 26.0 | 3562 | 2.4942 | 27.4758 | 11.0064 | 24.5062 | 25.05 | 18.4655 |
| 2.213 | 27.0 | 3699 | 2.4954 | 27.6967 | 11.1744 | 24.7646 | 25.3172 | 18.4655 |
| 2.213 | 28.0 | 3836 | 2.4951 | 27.7428 | 10.9365 | 24.6427 | 25.2432 | 18.5172 |
| 2.213 | 29.0 | 3973 | 2.4949 | 27.6877 | 10.9522 | 24.6101 | 25.2471 | 18.4655 |
| 2.1865 | 30.0 | 4110 | 2.4952 | 27.7295 | 11.0173 | 24.6556 | 25.2397 | 18.4655 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rohitp1/ws_w2lm_base_distill_noisy_teacher_libri_epochs_50_batch_8
|
rohitp1
| 2023-06-17T15:57:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-02T12:05:05Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ws_w2lm_base_distill_noisy_teacher_libri_epochs_50_batch_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ws_w2lm_base_distill_noisy_teacher_libri_epochs_50_batch_8
This model is a fine-tuned version of [rohitp1/kkkh_w2lm_base_plus_finetune_teacher_noise_libri360_50_epochs_batch_16](https://huggingface.co/rohitp1/kkkh_w2lm_base_plus_finetune_teacher_noise_libri360_50_epochs_batch_16) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0945
- Wer: 0.1041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0562 | 2.46 | 250 | 0.0741 | 0.1135 |
| 0.0538 | 4.92 | 500 | 0.0736 | 0.1126 |
| 0.0506 | 7.38 | 750 | 0.0751 | 0.1116 |
| 0.0465 | 9.84 | 1000 | 0.0752 | 0.1099 |
| 0.0424 | 12.31 | 1250 | 0.0762 | 0.1089 |
| 0.0385 | 14.77 | 1500 | 0.0790 | 0.1078 |
| 0.0355 | 17.23 | 1750 | 0.0788 | 0.1062 |
| 0.0335 | 19.69 | 2000 | 0.0795 | 0.1053 |
| 0.0314 | 22.15 | 2250 | 0.0825 | 0.1052 |
| 0.0298 | 24.61 | 2500 | 0.0837 | 0.1055 |
| 0.0285 | 27.07 | 2750 | 0.0873 | 0.1049 |
| 0.0274 | 29.53 | 3000 | 0.0868 | 0.1043 |
| 0.0266 | 32.0 | 3250 | 0.0891 | 0.1044 |
| 0.0256 | 34.46 | 3500 | 0.0902 | 0.1044 |
| 0.0251 | 36.92 | 3750 | 0.0911 | 0.1044 |
| 0.0247 | 39.38 | 4000 | 0.0926 | 0.1042 |
| 0.0242 | 41.84 | 4250 | 0.0936 | 0.1042 |
| 0.0238 | 44.3 | 4500 | 0.0940 | 0.1042 |
| 0.0235 | 46.76 | 4750 | 0.0938 | 0.1042 |
| 0.0233 | 49.22 | 5000 | 0.0945 | 0.1041 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.7.1
- Tokenizers 0.11.0
|
Elvenson/stable_diffusion_weights
|
Elvenson
| 2023-06-17T15:50:04Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-04-15T04:59:14Z |
---
license: openrail
---
# Stable Diffusion Model Weights
This repo is mainly for storing the Keras weights for Stable Diffusion models. The model is adapted
from [here](https://github.com/keras-team/keras-cv/tree/master/keras_cv/models/stable_diffusion).
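The card itself does not include usage code. As a rough sketch under assumptions (the API shown is the upstream keras-cv Stable Diffusion model these weights were adapted from; pointing it at the files stored in this repo is not documented and is left out):
```python
import keras_cv

# instantiate the upstream keras-cv Stable Diffusion model (downloads its default weights)
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512)
images = model.text_to_image("a photograph of an astronaut riding a horse", batch_size=1)
```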
|
l3cube-pune/hindi-marathi-dev-albert
|
l3cube-pune
| 2023-06-17T15:34:53Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"hi",
"mr",
"multilingual",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-19T19:19:20Z |
---
language:
- hi
- mr
- multilingual
license: cc-by-4.0
---
## DevAlBERT
DevAlBERT is a Devanagari AlBERT model trained on publicly available Hindi and Marathi monolingual datasets.
[Project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418).
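A minimal fill-mask sketch (not from the original card; the Hindi example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="l3cube-pune/hindi-marathi-dev-albert")
masked = f"मुंबई भारत का सबसे बड़ा {fill.tokenizer.mask_token} है।"  # "Mumbai is India's largest [MASK]."
print(fill(masked))
```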
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/kannada-bert
|
l3cube-pune
| 2023-06-17T15:32:28Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"kn",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-20T07:02:51Z |
---
license: cc-by-4.0
language: kn
---
## KannadaBERT
KannadaBERT is a Kannada BERT model trained on publicly available Kannada monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418).
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
l3cube-pune/hindi-albert
|
l3cube-pune
| 2023-06-17T15:32:02Z | 142 | 1 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"hi",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-19T18:36:25Z |
---
license: cc-by-4.0
language: hi
---
## HindAlBERT
HindAlBERT is a Hindi AlBERT model trained on publicly available Hindi monolingual datasets.
[Project link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418) ([pdf](http://dx.doi.org/10.13140/RG.2.2.14606.84809)).
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br>
|
atrytone/scibert_uncased_claim_id
|
atrytone
| 2023-06-17T15:16:47Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T05:07:15Z |
---
license: apache-2.0
language:
- en
---
Fine-tuned SciBERT uncased model [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) for claim detection from abstracts.
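A minimal usage sketch (not from the original card; the input sentence is illustrative, and the label names depend on the model's config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="atrytone/scibert_uncased_claim_id")
print(clf("We show that our method improves accuracy by 4.2% on the benchmark."))
```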
|
Hosioka/AniReal
|
Hosioka
| 2023-06-17T15:16:41Z | 52 | 75 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-Diffusion",
"stable-diffusion-diffusers",
"safetensors",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-20T11:45:12Z |
---
license: creativeml-openrail-m
thumbnail: "https://m1.afileditch.ch/uJoodjDNVWxDqhhQHeRH.png"
language:
- en
tags:
- text-to-image
- stable-Diffusion
- stable-diffusion-diffusers
- diffusers
- safetensors
inference: true
---
# Deprecated. Refer to this [New Version](https://huggingface.co/Hosioka/Baka-Diffusion)
This Repository contains AniReal V1.0
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the **Model** to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
|
HoseinPanahi/finbert-lm-finetuned-news
|
HoseinPanahi
| 2023-06-17T15:14:00Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-17T12:01:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: finbert-lm-finetuned-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finbert-lm-finetuned-news
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3271 | 1.0 | 2736 | 2.2334 |
| 2.0392 | 2.0 | 5472 | 2.0723 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gombalmukiyo/gombalmukiyo
|
gombalmukiyo
| 2023-06-17T15:11:32Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T15:09:52Z |
---
license: creativeml-openrail-m
---
|
zwzhang/Accountable-Textual-Visual-Chat
|
zwzhang
| 2023-06-17T15:10:10Z | 0 | 2 | null |
[
"arxiv:2303.05983",
"license:apache-2.0",
"region:us"
] | null | 2023-06-17T14:41:30Z |
---
license: apache-2.0
---
## Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation
[[Paper]](https://arxiv.org/pdf/2303.05983.pdf) [[Project Page]](https://matrix-alpha.github.io/) [[GitHub]](https://github.com/matrix-alpha/Accountable-Textual-Visual-Chat)

|
Bala-A87/Huggy-DRL
|
Bala-A87
| 2023-06-17T14:35:21Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-17T14:34:51Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Bala-A87/Huggy-DRL
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Bodolaz/Unit-2
|
Bodolaz
| 2023-06-17T14:17:03Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T14:16:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Unit-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Bodolaz/Unit-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Bodolaz/q-FrozenLake-v1-4x4-noSlippery
|
Bodolaz
| 2023-06-17T13:46:43Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:46:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Bodolaz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
catrabbitbear/lunar-lander
|
catrabbitbear
| 2023-06-17T13:43:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:43:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.65 +/- 38.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
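One possible way to fill in the stub above (not from the original card; the checkpoint filename is an assumption, so check the repo's file list for the actual name):
```python
import gymnasium as gym  # older setups use `import gym`
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is assumed; adjust to the checkpoint actually stored in this repo
checkpoint = load_from_hub(repo_id="catrabbitbear/lunar-lander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()  # plain `obs = env.reset()` with classic gym
action, _ = model.predict(obs, deterministic=True)
```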
|
2tle/kobart-std-to-jeju
|
2tle
| 2023-06-17T13:41:43Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T13:31:26Z |
---
license: mit
language:
- ko
metrics:
- bleu
---
# Korean Standard To Jejueo(Jeju Dialect) Translator BART Model
## Dataset
- [AI Hub Korean Jejueo(Jeju Dialect) Voice data](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=121)
## Model Score
- BLEU: 40%
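## Usage
A minimal usage sketch (not part of the original card; the input sentence is an arbitrary standard-Korean example):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="2tle/kobart-std-to-jeju")
print(translator("안녕하세요, 오늘 날씨가 좋네요."))  # "Hello, the weather is nice today."
```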
|
tux/Reinforce-copter
|
tux
| 2023-06-17T13:23:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:34:21Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.70 +/- 15.84
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
antphb/DS-Chatbox-bigscience-bloom-560m
|
antphb
| 2023-06-17T13:15:50Z | 151 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bloom",
"text-generation",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T11:27:42Z |
---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: DS-Chatbox-bigscience-bloom-560m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Chatbox-bigscience-bloom-560m
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.8320
- eval_runtime: 175.7948
- eval_samples_per_second: 37.402
- eval_steps_per_second: 4.676
- epoch: 0.03
- step: 500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
thiendio/ppo-from-scratch-lunar
|
thiendio
| 2023-06-17T12:26:39Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T12:26:16Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -158.29 +/- 101.80
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
samata/my_awesome_billsum_model
|
samata
| 2023-06-17T12:20:04Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T12:09:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5563
- Rouge1: 0.1423
- Rouge2: 0.0518
- Rougel: 0.1171
- Rougelsum: 0.1171
- Gen Len: 19.0
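A minimal usage sketch (not part of the original card): it assumes the standard `transformers` summarization pipeline and the "summarize: " prefix used by the usual T5/billsum fine-tuning recipe; the input text is illustrative.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="samata/my_awesome_billsum_model")
bill_text = "The people of the State of California do enact as follows: ..."  # illustrative input
print(summarizer("summarize: " + bill_text))
```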
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8514 | 0.1229 | 0.0326 | 0.1015 | 0.1016 | 19.0 |
| No log | 2.0 | 124 | 2.6361 | 0.1308 | 0.0421 | 0.1066 | 0.1068 | 19.0 |
| No log | 3.0 | 186 | 2.5725 | 0.139 | 0.0488 | 0.1138 | 0.114 | 19.0 |
| No log | 4.0 | 248 | 2.5563 | 0.1423 | 0.0518 | 0.1171 | 0.1171 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
alexanderjoossens/w2v2-libri-10min
|
alexanderjoossens
| 2023-06-17T12:16:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-22T09:09:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2-libri-10min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-libri-10min
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
SikongSphere/sikong-llama-7b-chinese
|
SikongSphere
| 2023-06-17T12:01:59Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:customized",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T09:09:19Z |
---
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune
This model is a fine-tuned version of [/root/autodl-tmp/sikong/repo/LMFlow/output_models/Linly-Chinese-LLaMA-7b-hf](https://huggingface.co//root/autodl-tmp/sikong/repo/LMFlow/output_models/Linly-Chinese-LLaMA-7b-hf) on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
jalFaizy/ppo-lunar
|
jalFaizy
| 2023-06-17T11:42:59Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:42:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: trial1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.71 +/- 14.10
name: mean_reward
verified: false
---
# **trial1** Agent playing **LunarLander-v2**
This is a trained model of a **trial1** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
bumstern/segmentation_model_russian_data
|
bumstern
| 2023-06-17T11:23:53Z | 0 | 0 |
pyannote-audio
|
[
"pyannote-audio",
"code",
"ru",
"license:mit",
"region:us"
] | null | 2023-06-17T11:02:51Z |
---
license: mit
language:
- ru
library_name: pyannote-audio
tags:
- code
---
# Segmentation model
This model was trained on AMI-MixHeadset and my own synthetic dataset of Russian speech.
Training time: 5 hours on GTX 3060
This model can be used as the segmentation model in the diarization pipeline from [pyannote/speaker-diarization](https://huggingface.co/pyannote/speaker-diarization)
| Benchmark | DER% |
| --------- |------|
| [AMI (*headset mix,*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*)](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 38.8 |
## Usage example
```python
import yaml
from yaml.loader import SafeLoader
import torch
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization
segm_model = torch.load('model/segm_model.pth', map_location=torch.device('cpu'))
embed_model = Model.from_pretrained("pyannote/embedding", use_auth_token='ACCESS_TOKEN_GOES_HERE')
diar_pipeline = SpeakerDiarization(
    segmentation=segm_model,
    segmentation_batch_size=16,
    clustering="AgglomerativeClustering",
    embedding=embed_model
)

with open('model/config.yaml', 'r') as f:
    diar_config = yaml.load(f, Loader=SafeLoader)

diar_pipeline.instantiate(diar_config)
annotation = diar_pipeline('audio.wav')
```
|
Enterprize1/q-taxi-v3
|
Enterprize1
| 2023-06-17T11:15:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:14:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Enterprize1/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kejolong/hdxduniform2.0
|
kejolong
| 2023-06-17T11:07:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T11:05:58Z |
---
license: creativeml-openrail-m
---
|
antphb/DS-Chatbox-mbart-large-50
|
antphb
| 2023-06-17T11:03:57Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T07:09:44Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DS-Chatbox-mbart-large-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DS-Chatbox-mbart-large-50
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.977 | 0.19 | 100 | 0.0001 |
| 0.023 | 0.38 | 200 | 0.0002 |
| 0.0005 | 0.57 | 300 | 0.0005 |
| 0.0007 | 0.76 | 400 | 0.0006 |
| 0.0012 | 0.95 | 500 | 0.0014 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
nomad-ai/ppo-LunarLander-v2-1
|
nomad-ai
| 2023-06-17T11:01:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T11:00:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.34 +/- 18.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mattladewig/distilbert-base-uncased-finetuned-ner
|
mattladewig
| 2023-06-17T10:34:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-17T08:37:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mattladewig/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mattladewig/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0342
- Validation Loss: 0.0614
- Train Precision: 0.9248
- Train Recall: 0.9365
- Train F1: 0.9306
- Train Accuracy: 0.9833
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1951 | 0.0694 | 0.9087 | 0.9181 | 0.9134 | 0.9799 | 0 |
| 0.0530 | 0.0621 | 0.9246 | 0.9301 | 0.9273 | 0.9823 | 1 |
| 0.0342 | 0.0614 | 0.9248 | 0.9365 | 0.9306 | 0.9833 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kevinng77/unsup_bert_L3
|
kevinng77
| 2023-06-17T10:00:06Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T08:47:52Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
---
```python
# transformers==4.29.1
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification
onnx_model_path = "kevinng77/unsup_bert_L3"
tokenizer = AutoTokenizer.from_pretrained(onnx_model_path)
onnx_model = ORTModelForSequenceClassification.from_pretrained(onnx_model_path)
onnx_pipe = pipeline(task="text-classification", model=onnx_model, tokenizer=tokenizer)
onnx_pipe("How many rows are there in the table?")
```
|
parkyunmin/beatles_lyrics
|
parkyunmin
| 2023-06-17T09:38:03Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T09:11:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: beatles_lyrics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beatles_lyrics
This model is a fine-tuned version of [wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics](https://huggingface.co/wvangils/GPT-Medium-Beatles-Lyrics-finetuned-newlyrics) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 1.1221 |
| No log | 2.0 | 100 | 1.0710 |
| No log | 3.0 | 150 | 1.0584 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ganghe74/distilbert-base-uncased-finetuned-emotion
|
ganghe74
| 2023-06-17T09:34:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T09:13:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.922469380812715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8057 | 1.0 | 250 | 0.3170 | 0.905 | 0.9023 |
| 0.242 | 2.0 | 500 | 0.2170 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.0
- Tokenizers 0.13.3
|
benol/Roma_Pyatifan
|
benol
| 2023-06-17T09:10:31Z | 0 | 0 | null |
[
"ru",
"en",
"arxiv:1910.09700",
"license:unknown",
"region:us"
] | null | 2023-06-17T08:58:04Z |
---
license: unknown
language:
- ru
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
parkyunmin/my_awesome_eli5_clm-model
|
parkyunmin
| 2023-06-17T09:09:15Z | 211 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T05:54:26Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 49 | 1.6679 |
| No log | 2.0 | 98 | 1.5629 |
| No log | 3.0 | 147 | 1.5380 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
PabloQuant29/ppo-LunarLander-v2
|
PabloQuant29
| 2023-06-17T08:36:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T08:35:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.46 +/- 18.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
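A minimal evaluation sketch (the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the default naming used in the course; check the repository files for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption, verify it in the repo files)
checkpoint = load_from_hub("PabloQuant29/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the policy over a few episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```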
|
mustika/alan2
|
mustika
| 2023-06-17T08:36:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T08:34:12Z |
---
license: creativeml-openrail-m
---
|
AriesChen/GeoLLM
|
AriesChen
| 2023-06-17T08:32:06Z | 195 | 3 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2023-06-17T08:30:04Z |
# GeoLLM
**Large Language Model for Geology**
Large language models are used to organize geology-related knowledge (geology, geophysics, geophysical logging, etc.). This version uses the [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) base model and fine-tunes it using P-tuning.
---
### Sedimentology
Sedimentology, the study of sedimentary rocks and the processes by which they are formed, includes and is related to a large number of phenomena. Sedimentology covers the five fundamental processes defined by the term sedimentation: weathering, erosion, transportation, deposition and diagenesis.
**Datasets:** 《沉积岩石学(第四版)》 朱筱敏 (Sedimentary Petrology, 4th edition, Zhu Xiaomin)
**Model:** ChatGLM-6B
**Fine-tuning:** P-Tuning v2
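A loading sketch based on the standard ChatGLM-6B usage pattern (an assumption, not an official snippet from this repository; ChatGLM checkpoints ship their architecture as custom code, hence `trust_remote_code=True`):
```python
from transformers import AutoTokenizer, AutoModel

# Load the fine-tuned checkpoint; half precision on GPU, as in the ChatGLM-6B docs
tokenizer = AutoTokenizer.from_pretrained("AriesChen/GeoLLM", trust_remote_code=True)
model = AutoModel.from_pretrained("AriesChen/GeoLLM", trust_remote_code=True).half().cuda()
model = model.eval()

# "什么是沉积岩石学?" = "What is sedimentary petrology?"
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
print(response)
```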
**Before fine-tuning**
```
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
response
沉积岩石学是一门研究沉积岩的形成、结构和成分的学科,主要关注地球表面上不同条件下的沉积过程和岩石形成机制,包括岩浆沉积、冰川沉积、洪水沉积、海洋沉积等。沉积岩石学在地质学、地球物理学、地球化学、材料科学等领域都有广泛应用,因为沉积岩是许多自然和人工地质工程的基础,如地质勘探、矿产资源开发、土木工程、环境科学等。沉积岩石学的研究对象包括沉积岩的地质特征、成分和构造特征,以及沉积岩与地壳、岩浆和变质岩的关系。研究方法包括沉积岩分析、岩相学分析、岩浆动力学分析等。
```
**After fine-tuning**
```
response, history = model.chat(tokenizer, "什么是沉积岩石学?", history=[])
response
沉积岩石学是研究沉积岩的物质成分、结构构造、岩石类型、沉积物沉积作用和沉积物质形成环境以及沉积岩分布规律的一门科学。
```
**Error Analysis:** We fine-tuned the model on roughly 500 entries taken from academic textbooks, applying P-Tuning v2 for optimization. Hyperparameters have not yet been tuned in detail. Given the small amount of training data and the limited fine-tuning, the outputs may still show some irregularities.
**Results Analysis:** The fine-tuned model gives noticeably more reliable answers (more precise and concise) within specialized knowledge domains. Going forward, we will keep enriching the training data and refining the fine-tuning methodology to obtain better results.
---
### TODO
1. Geophysical Exploration
2. Geophysical logging
3. Petroleum Geology
etc...
---
### Related Resources
1. [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B): ChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters.
|
SM16/TreeClassifier
|
SM16
| 2023-06-17T08:15:11Z | 218 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-17T07:27:25Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: TreeClassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# TreeClassifier
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
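A quick inference sketch (the image path `tree.jpg` is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier and score a local image
classifier = pipeline("image-classification", model="SM16/TreeClassifier")
for prediction in classifier("tree.jpg"):  # path or URL of an image
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```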
## Example Images
#### Pepper Tree

#### Weeping Willow

|
musabg/mt5-xl-tr-summarization
|
musabg
| 2023-06-17T07:25:20Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"tr",
"dataset:musabg/wikipedia-tr-summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T16:24:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- musabg/wikipedia-tr-summarization
metrics:
- rouge
model-index:
- name: mt5-xl-tr-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: musabg/wikipedia-tr-summarization
type: musabg/wikipedia-tr-summarization
split: validation
metrics:
- name: Rouge1
type: rouge
value: 56.4468
language:
- tr
---
# mT5-Xl Turkish Summarization
This model is a fine-tuned version of [google/mt5-xl](https://huggingface.co/google/mt5-xl) on the musabg/wikipedia-tr-summarization dataset.
It can be used with the Hugging Face summarization pipeline.
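A minimal usage sketch (generation arguments are illustrative; the model is large, so a GPU is recommended):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="musabg/mt5-xl-tr-summarization")

text = "..."  # long Turkish document to summarize
summary = summarizer(text, max_length=128, truncation=True)[0]["summary_text"]
print(summary)
```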
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Eval results
It achieves the following results on the evaluation set:
- Loss: 0.5676
- Rouge1: 56.4468
- Rouge2: 41.3258
- Rougel: 48.1909
- Rougelsum: 48.4284
- Gen Len: 75.9265
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alsonlai/test
|
alsonlai
| 2023-06-17T07:23:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T07:22:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="alsonlai/test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Irgendsoeine/FaceTheVote3
|
Irgendsoeine
| 2023-06-17T07:10:58Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-17T06:56:45Z |
---
pipeline_tag: image-classification
---
|
kjiwon1222/my_awesome_eli5_clm-model
|
kjiwon1222
| 2023-06-17T06:54:34Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T06:32:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8621 | 1.0 | 1137 | 3.7690 |
| 3.7782 | 2.0 | 2274 | 3.7533 |
| 3.7245 | 3.0 | 3411 | 3.7506 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
aga3134/poca-SoccerTwos
|
aga3134
| 2023-06-17T06:48:55Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-17T06:48:14Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: aga3134/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
octipuw/policy_gradient-cartpole-v1
|
octipuw
| 2023-06-17T06:40:28Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T05:35:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: policy_gradient-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 166.70 +/- 18.03
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vedu/bart-large-perturbed
|
vedu
| 2023-06-17T06:21:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"rust",
"bart",
"feature-extraction",
"en",
"arxiv:1910.13461",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-16T20:22:31Z |
---
license: apache-2.0
language: en
---
# BART (large-sized model)
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
Weights shared here are effectively those of facebook/bart-large, but with noise added to the BOS embedding to assist fine-tuning.
## Intended uses & limitations
There have been quite a few issues related to fine-tuning BART for text generation, and this repo implements the solution discussed in [#15559](https://github.com/huggingface/transformers/issues/15559):
adding some noise to the pre-trained model's BOS embedding. This seems to solve the problem of endless BOS generation for a fine-tuned BART model.
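A rough sketch of that idea (written only as an illustration of the workaround; the noise scale and the exact procedure used to produce these weights are not documented here):
```python
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Perturb only the BOS token embedding with small Gaussian noise before fine-tuning,
# so the fine-tuned decoder does not collapse into generating BOS endlessly.
bos_id = model.config.bos_token_id
with torch.no_grad():
    embedding_weights = model.get_input_embeddings().weight
    embedding_weights[bos_id] += 1e-3 * torch.randn_like(embedding_weights[bos_id])  # scale is illustrative
```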
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import BartTokenizer, BartModel
tokenizer = BartTokenizer.from_pretrained('vedu/bart-large-perturbed')
model = BartModel.from_pretrained('vedu/bart-large-perturbed')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
pigeon01/sungju-finetuned-zh-to-ko1
|
pigeon01
| 2023-06-17T05:47:05Z | 228 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-17T05:12:16Z |
---
license: mit
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: sungju-finetuned-zh-to-ko1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sungju-finetuned-zh-to-ko1
This model is a fine-tuned version of [alirezamsh/small100](https://huggingface.co/alirezamsh/small100) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0467
- Bleu: 10.2096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sunflowermarshmallows/dqn-SpaceInvadersNoFrameskip-v4
|
sunflowermarshmallows
| 2023-06-17T05:25:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T05:24:36Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 629.00 +/- 184.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunflowermarshmallows -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sunflowermarshmallows -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sunflowermarshmallows
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nolanaatama/rngrndrvcv800pchsrthysttylrsvrsn
|
nolanaatama
| 2023-06-17T04:33:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-31T04:13:59Z |
---
license: creativeml-openrail-m
---
|
ALPHONSE28/EQUIPO06SEMANA09
|
ALPHONSE28
| 2023-06-17T04:33:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T06:38:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: EQUIPO06SEMANA09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EQUIPO06SEMANA09
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9233
- F1: 0.9514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
fgeyer/reinforce-CartPole-v1
|
fgeyer
| 2023-06-17T04:04:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T03:59:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aozora-esther/aozora-esther
|
aozora-esther
| 2023-06-17T04:00:52Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-17T04:00:52Z |
---
license: bigscience-openrail-m
---
|
2022happy/swin-tiny-patch4-window7-224-finetuned-eurosat
|
2022happy
| 2023-06-17T03:51:48Z | 245 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-15T13:46:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.97
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0893
- Accuracy: 0.97
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5255 | 1.0 | 351 | 0.1262 | 0.9596 |
| 0.3808 | 2.0 | 703 | 0.1031 | 0.9652 |
| 0.3268 | 2.99 | 1053 | 0.0893 | 0.97 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nolanaatama/dmnslyrkmtsnybnmstyllr
|
nolanaatama
| 2023-06-17T02:31:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T02:27:58Z |
---
license: creativeml-openrail-m
---
|
Atnafu/amhric_xlmr-small
|
Atnafu
| 2023-06-17T02:23:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-17T02:17:40Z |
---
license: afl-3.0
tags:
- generated_from_trainer
model-index:
- name: amh_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amh_small
This model is a fine-tuned version of [Davlan/afro-xlmr-small](https://huggingface.co/Davlan/afro-xlmr-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
breadlicker45/MuseRift
|
breadlicker45
| 2023-06-17T02:11:29Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:breadlicker45/musenet-encoders-40k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T01:22:00Z |
---
datasets:
- breadlicker45/musenet-encoders-40k
---
|
paulahugging/MABEPA_2
|
paulahugging
| 2023-06-17T01:48:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-16T15:43:20Z |
Objective. The model starts from a BERT base model with the goal of identifying the semantic similarity between two sentences (Semantic Textual Similarity, "STS"), that is, measuring how similar two documents are. The model is built as a siamese neural network, which means using the same network, with identical parameters, to process the premise and the hypothesis. "The STS task is motivated by the observation that accurately modelling the meaning similarity of sentences is a foundational language understanding problem relevant to numerous applications, including: machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantics, search systems, dialogue and conversation." (Cer et al., 2017, p. 1).
Data. The chosen dataset was XNLI in Spanish. It contains the fields 'premise', 'hypothesis' and 'label', where the first two are sentences (text strings) and the third is the semantic relation between them, encoded as: 'entailment': 0, 'neutral': 1, 'contradiction': 2. It consists of three splits: TRAINING, with 392,702 examples; TEST, with 5,010 examples; VALIDATION, with 2,490 examples.
In addition, a Spanish vocabulary of around 31,000 tokens is used, including the special tokens "[MASK]", "[PAD]", "[EOS]", "[UNK]", "[CLS]", "[SEP]", which occupy the first positions of the vocabulary. This vocabulary comes from the Hugging Face model whose model_name is "dccuchile/bert-base-spanish-wwm-uncased".
Method
Tokenization. First we import AutoTokenizer and obtain the tokenizer of the model defined above. Besides converting tokens (words) into their vocabulary IDs, it prepends the id of the special token "CLS" and appends "SEP". We also set the model's maximum length (tokenizer.model_max_length) as a parameter; this truncates premises and hypotheses that are longer and pads shorter ones (with "PAD") up to the desired length. Note that this tokenizer already provides functions equivalent to itos and stoi.
We then tokenize the dataset with the map function, for both the premise and the hypothesis.
Batch construction. With tokenization done, we build the batches using the torch DataLoader. The result is batches of size 32 for the train split and 16 for both validation and test. Their dimensions are batch size x number of elements: for the premise and the hypothesis the number of elements is the tokenization length, while for the label, being a single value, the dimension is batch size x 1. We also add the attention_mask of the premise and of the hypothesis to the batches.
Base model. BERT is a pre-trained transformer network (...). The input of BERT consists of the two sentences separated by the special token [SEP]. (...) and the output is passed to a simple regression function to derive the final label (Reimers and Gurevych, 2019, p. 2). On top of this base model we fine-tuned our network, based on the diagram in Reimers and Gurevych (2019, p. 3):
That is, we pass the premise and the hypothesis through BERT, obtaining a pooler output for each of them ("u" and "v"). These are then concatenated, together with the absolute difference |u - v|, and the result is passed through a linear layer producing 3 outputs, which are the probabilities associated with each label. The network was trained on the train split using cross-entropy as the loss function, and then validated. The results are reported in the next section.
In this case the model was trained for two full epochs.
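A minimal sketch of the sentence-pair head described above (variable names are illustrative; the classifier takes the concatenation [u, v, |u - v|] and is trained with cross-entropy):
```python
import torch
import torch.nn as nn
from transformers import AutoModel

class SiameseBertClassifier(nn.Module):
    def __init__(self, model_name="dccuchile/bert-base-spanish-wwm-uncased", num_labels=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)  # shared weights for premise and hypothesis
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(3 * hidden, num_labels)  # [u, v, |u - v|] -> 3 labels

    def forward(self, premise_ids, premise_mask, hypothesis_ids, hypothesis_mask):
        u = self.bert(input_ids=premise_ids, attention_mask=premise_mask).pooler_output
        v = self.bert(input_ids=hypothesis_ids, attention_mask=hypothesis_mask).pooler_output
        features = torch.cat([u, v, torch.abs(u - v)], dim=-1)
        return self.classifier(features)  # logits; train with nn.CrossEntropyLoss()
```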
|
zhangjian94cn/Taxi-v3
|
zhangjian94cn
| 2023-06-17T01:33:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-17T01:33:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="zhangjian94cn/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
UofA-LINGO/text-to-triplets-explanation-v2
|
UofA-LINGO
| 2023-06-17T00:41:02Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-17T00:38:48Z |
---
license: mit
---
LoRA weights for `LLaMA-7B`
Trained on 'taesiri/webnlg-triplets-explanation-v1' for 4 epochs.
Command:
```
WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model='decapoda-research/llama-7b-hf' --data_path 'taesiri/webnlg-triplets-explanation-v1' --num_epochs=4 --cutoff_len=512 --group_by_length --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' --lora_r=8 --micro_batch_size=8 --batch_size=32
```
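A hedged loading sketch with PEFT (the adapter and base model names are taken from this card; library versions and generation settings are assumptions):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(model, "UofA-LINGO/text-to-triplets-explanation-v2")
model.eval()
```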
|
UofA-LINGO/text-to-triplets-explanation-v1
|
UofA-LINGO
| 2023-06-17T00:39:59Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-16T22:26:38Z |
---
license: mit
---
LoRA weights for `LLaMA-7B`
Trained on 'taesiri/webnlg-triplets-explanation-v1' for 2 epochs.
|
arsalsyed/distilgpt2-finetuned-wikitext2
|
arsalsyed
| 2023-06-17T00:14:20Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T23:41:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7596 | 1.0 | 2334 | 3.6651 |
| 3.6543 | 2.0 | 4668 | 3.6468 |
| 3.6024 | 3.0 | 7002 | 3.6420 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
H4nan/dqn-SpaceInvadersNoFrameskip-v4
|
H4nan
| 2023-06-16T23:54:53Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-23T18:30:15Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 537.00 +/- 181.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga H4nan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga H4nan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga H4nan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Ioanaaaaaaa/bert-base-uncased-with-preprocess-finetuned-emotion-5-epochs-5e-05-lr-0.1-weight_decay
|
Ioanaaaaaaa
| 2023-06-16T23:47:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T23:30:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-with-preprocess-finetuned-emotion-5-epochs-5e-05-lr-0.1-weight_decay
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.941
- name: F1
type: f1
value: 0.9411169346964399
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-preprocess-finetuned-emotion-5-epochs-5e-05-lr-0.1-weight_decay
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2591
- Accuracy: 0.941
- F1: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0799 | 1.0 | 250 | 0.1898 | 0.9375 | 0.9377 |
| 0.0516 | 2.0 | 500 | 0.2290 | 0.938 | 0.9383 |
| 0.0386 | 3.0 | 750 | 0.2107 | 0.9415 | 0.9419 |
| 0.0195 | 4.0 | 1000 | 0.2607 | 0.9435 | 0.9433 |
| 0.0149 | 5.0 | 1250 | 0.2591 | 0.941 | 0.9411 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sam34738/mBERT
|
sam34738
| 2023-06-16T23:44:39Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T20:24:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: mbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9812
- Accuracy: 0.6583
- F1: 0.6948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-05
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.749 | 1.0 | 2100 | 0.7068 | 0.4994 | 0.0131 |
| 0.7707 | 2.0 | 4200 | 0.9812 | 0.6583 | 0.6948 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TacLucas/Test
|
TacLucas
| 2023-06-16T23:07:19Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-16T23:06:14Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ghze/Taxi_v3
|
ghze
| 2023-06-16T23:00:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T23:00:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ghze/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ghze/Taxi
|
ghze
| 2023-06-16T22:59:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T22:59:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ghze/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
amjadfqs/finalProject
|
amjadfqs
| 2023-06-16T22:28:48Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-15T17:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
model-index:
- name: finalProject
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9890023566378633
- name: Precision
type: precision
value: 0.9894345375382527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finalProject
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9890
- F1 Score: 0.9892
- Precision: 0.9894
- Sensitivity: 0.9891
- Specificity: 0.9972
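As a usage sketch not present in the original card (the image path is a placeholder), the checkpoint can presumably be loaded through the standard `transformers` image-classification pipeline:

```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned Swin checkpoint and classify one image.
classifier = pipeline("image-classification", model="amjadfqs/finalProject")
predictions = classifier("example_image.png")  # placeholder path or a PIL.Image
print(predictions)  # list of {"label": ..., "score": ...} dicts
```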
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Precision | Sensitivity | Specificity |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------:|:-----------:|:-----------:|
| 0.3384 | 1.0 | 30 | 0.2387 | 0.9144 | 0.9163 | 0.9197 | 0.9146 | 0.9781 |
| 0.1608 | 2.0 | 60 | 0.1635 | 0.9466 | 0.9476 | 0.9485 | 0.9474 | 0.9865 |
| 0.0953 | 3.0 | 90 | 0.0915 | 0.9698 | 0.9703 | 0.9706 | 0.9706 | 0.9924 |
| 0.0573 | 4.0 | 120 | 0.1125 | 0.9607 | 0.9617 | 0.9634 | 0.9621 | 0.9901 |
| 0.0335 | 5.0 | 150 | 0.0536 | 0.9827 | 0.9831 | 0.9837 | 0.9826 | 0.9957 |
| 0.0185 | 6.0 | 180 | 0.0543 | 0.9827 | 0.9830 | 0.9837 | 0.9825 | 0.9957 |
| 0.0226 | 7.0 | 210 | 0.0478 | 0.9859 | 0.9861 | 0.9866 | 0.9856 | 0.9965 |
| 0.0131 | 8.0 | 240 | 0.0468 | 0.9843 | 0.9846 | 0.9847 | 0.9846 | 0.9961 |
| 0.0087 | 9.0 | 270 | 0.0411 | 0.9890 | 0.9892 | 0.9894 | 0.9891 | 0.9972 |
| 0.0043 | 10.0 | 300 | 0.0376 | 0.9886 | 0.9888 | 0.9890 | 0.9887 | 0.9971 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
devonho/my_awesome_opus_books_model
|
devonho
| 2023-06-16T22:28:30Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-06T07:28:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
config: en-ja
split: test
args: en-ja
metrics:
- name: Bleu
type: bleu
value: 23.8215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4506
- Bleu: 23.8215
- Gen Len: 4.6055
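As a hedged sketch not in the original card, the model can presumably be called through the `text2text-generation` pipeline; the task prefix below is an assumption, since the prefix used during fine-tuning is not documented:

```python
from transformers import pipeline

# Hedged sketch: the exact task prefix used in training is not stated in the card.
translator = pipeline("text2text-generation", model="devonho/my_awesome_opus_books_model")
print(translator("translate English to Japanese: The weather is nice today."))
```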
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-------:|:---------------:|:-------:|:-------:|
| 0.4468 | 1.0 | 500000 | 0.4585 | 23.9023 | 4.705 |
| 0.4397 | 2.0 | 1000000 | 0.4506 | 23.8215 | 4.6055 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
maren-hugg/xlm-roberta-base-finetuned-panx-en-custom
|
maren-hugg
| 2023-06-16T21:56:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-12T06:49:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-panx-en-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en-custom
This model is a fine-tuned version of [maren-hugg/xlm-roberta-base-finetuned-panx-en](https://huggingface.co/maren-hugg/xlm-roberta-base-finetuned-panx-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1045
- F1: 0.8782
- Precision: 0.8496
- Recall: 0.9088
- Accuracy: 0.9754
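A minimal inference sketch (not part of the original card), assuming the checkpoint is a standard PAN-X style NER model:

```python
from transformers import pipeline

# Hedged sketch: aggregate sub-word predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="maren-hugg/xlm-roberta-base-finetuned-panx-en-custom",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited the Louvre in Paris."))
```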
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.886597454037411e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 0.128 | 0.75 | 24 | 0.1087 | 0.8514 | 0.8299 | 0.8740 | 0.9713 |
| 0.074 | 1.5 | 48 | 0.1006 | 0.8637 | 0.8505 | 0.8773 | 0.9750 |
| 0.0506 | 2.25 | 72 | 0.0987 | 0.8728 | 0.8587 | 0.8872 | 0.9749 |
| 0.0393 | 3.0 | 96 | 0.1045 | 0.8782 | 0.8496 | 0.9088 | 0.9754 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Enterprize1/ppo-LunarLander-v2
|
Enterprize1
| 2023-06-16T21:45:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T21:45:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.78 +/- 66.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
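Since the usage section above is left as a TODO, here is a minimal loading sketch with `huggingface_sb3`; the archive filename is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hedged sketch: the filename inside the repo is assumed, adjust if it differs.
checkpoint = load_from_hub(repo_id="Enterprize1/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```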
|
stanford-crfm/music-small-ar-inter-100k
|
stanford-crfm
| 2023-06-16T21:28:37Z | 182 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:04:27Z |
---
license: apache-2.0
---
This is a Small (112M parameter) Transformer trained for 100k steps on interarrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-small-ar-800k
|
stanford-crfm
| 2023-06-16T21:28:12Z | 183 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:01:12Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
FALLENSTAR/Volvo850LoRa
|
FALLENSTAR
| 2023-06-16T21:28:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-11T17:03:34Z |










|
stanford-crfm/music-small-ar-100k
|
stanford-crfm
| 2023-06-16T21:27:39Z | 184 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:58:03Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-small-100k
|
stanford-crfm
| 2023-06-16T21:26:29Z | 181 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-04T23:26:52Z |
---
license: apache-2.0
---
This is a Small (128M parameter) Transformer trained for 100k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
stanford-crfm/music-medium-800k
|
stanford-crfm
| 2023-06-16T21:25:52Z | 572 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"arxiv:2306.08620",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-06-05T00:17:20Z |
---
license: apache-2.0
---
This is a Medium (360M parameter) Transformer trained for 800k steps on arrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/). This model was trained with anticipation.
# References for the Anticipatory Music Transformer
The Anticipatory Music Transformer paper is available on [ArXiv](http://arxiv.org/abs/2306.08620).
The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).
Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).
See the accompanying [blog post](https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html) for additional discussion of this model.
|
jondurbin/airoboros-65b-gpt4-1.2-peft
|
jondurbin
| 2023-06-16T21:01:26Z | 0 | 0 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:other",
"region:us"
] | null | 2023-06-14T09:11:36Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---
PEFT weights of https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2; see that model card for details.
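As a hedged loading sketch (not from the original card), assuming the repo contains standard PEFT adapter files that are applied on top of a LLaMA-65B base; the base repo id below is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hedged sketch: "huggyllama/llama-65b" is an assumed stand-in for the 65B base weights.
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-65b",
                                            device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "jondurbin/airoboros-65b-gpt4-1.2-peft")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-65b")
```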
|
Schnitzl/detr-resnet-50_finetuned_cppe5
|
Schnitzl
| 2023-06-16T20:54:42Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-16T17:17:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
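A hedged inference sketch (not in the original card), assuming the standard `transformers` object-detection pipeline applies; the image path is a placeholder:

```python
from transformers import pipeline

# Hedged sketch: run the fine-tuned DETR model on a single image.
detector = pipeline("object-detection", model="Schnitzl/detr-resnet-50_finetuned_cppe5")
print(detector("example_image.jpg"))  # placeholder path; returns boxes, labels, scores
```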
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crlandsc/bsrnn-bass
|
crlandsc
| 2023-06-16T20:24:33Z | 0 | 1 | null |
[
"audio source separation",
"music demixing",
"band-split recurrent neural network",
"bsrnn",
"spectrogram",
"bass",
"region:us"
] | null | 2023-06-16T20:16:53Z |
---
tags:
- audio source separation
- music demixing
- band-split recurrent neural network
- bsrnn
- spectrogram
- bass
---
# Model Card for bsrnn-bass
Bass model for [Music-Demixing-with-Band-Split-RNN](https://github.com/crlandsc/Music-Demixing-with-Band-Split-RNN).
|
sngsfydy/resnet-50-finetuned-eurosat
|
sngsfydy
| 2023-06-16T20:17:05Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-16T19:14:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0706
- Accuracy: 0.5152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6069 | 0.99 | 20 | 1.5839 | 0.3879 |
| 1.5395 | 1.98 | 40 | 1.4860 | 0.5485 |
| 1.4321 | 2.96 | 60 | 1.3500 | 0.5364 |
| 1.3292 | 4.0 | 81 | 1.1826 | 0.5212 |
| 1.233 | 4.99 | 101 | 1.0706 | 0.5152 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
blevlabs/alpaca-7b
|
blevlabs
| 2023-06-16T20:16:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2023-06-15T16:01:29Z |
|
FALLENSTAR/CedricGloriaLoRa
|
FALLENSTAR
| 2023-06-16T20:10:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-09T20:58:56Z |
### Model Description
First of all, this is a LoRA. It is based on my favorite Nissan Cedric/Gloria Y31 Hardtop from model years '87-'91. It is a test model, so it has defects, and I don't remember how many samples and epochs were used to train it... But with some checkpoints the results come out very similar and quite funny.
The best images I was able to get with this LoRA were at these settings:
- Steps: 25
- Sampler: DPM++ SDE Karras
- CFG scale: 6.5
- LoRA strength: 0.8-1
### Results













|
GEMCorp/q-FrozenLake-v1-4x4-noSlippery
|
GEMCorp
| 2023-06-16T19:51:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:51:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL Course notebook.
model = load_from_hub(repo_id="GEMCorp/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ChristineCheng/my_awesome_eli5_clm-model
|
ChristineCheng
| 2023-06-16T19:49:19Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T19:33:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ChristineCheng/my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ChristineCheng/my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7347
- Validation Loss: 3.7399
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9119 | 3.7667 | 0 |
| 3.7942 | 3.7493 | 1 |
| 3.7347 | 3.7399 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
SSSSSSSSSSSJJJJJJJJJJJJJ/my_awesome_eli5_clm-model
|
SSSSSSSSSSSJJJJJJJJJJJJJ
| 2023-06-16T19:44:19Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T19:13:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7341
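A hedged generation sketch (not part of the original card), assuming the standard text-generation pipeline; the prompt is only an illustration:

```python
from transformers import pipeline

# Hedged sketch: sample a short continuation from the fine-tuned distilgpt2 checkpoint.
generator = pipeline("text-generation",
                     model="SSSSSSSSSSSJJJJJJJJJJJJJ/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40))
```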
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8765 | 1.0 | 1120 | 3.7555 |
| 3.7769 | 2.0 | 2240 | 3.7368 |
| 3.7331 | 3.0 | 3360 | 3.7341 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crlandsc/tiny-audio-diffusion-snares
|
crlandsc
| 2023-06-16T19:25:10Z | 3 | 1 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-10T15:20:00Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-snares
Snare drum model for tiny-audio-diffusion. Use with [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate snare drum samples.
|
ananay/kneearch
|
ananay
| 2023-06-16T19:17:59Z | 22 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-16T19:05:11Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kneearch Dreambooth model trained by ananay with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
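A hedged generation sketch (not in the original card); the trigger phrase is an assumption based on the concept name:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: "kneearch" as the instance token is an assumption.
pipe = StableDiffusionPipeline.from_pretrained("ananay/kneearch", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of kneearch").images[0]
image.save("kneearch.png")
```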
Sample pictures of this concept:
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-16T19:17:42Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-16T15:49:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromB_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_benigh_200K_top_p_0.75 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Accuracy: 0.9978
- F1: 0.9767
- Precision: 0.9994
- Recall: 0.955
- Roc Auc Score: 0.9775
- Tpr At Fpr 0.01: 0.9632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0057 | 1.0 | 35625 | 0.0113 | 0.9979 | 0.9779 | 0.9954 | 0.961 | 0.9804 | 0.9518 |
| 0.0035 | 2.0 | 71250 | 0.0150 | 0.9975 | 0.9726 | 0.9983 | 0.9482 | 0.9741 | 0.95 |
| 0.0011 | 3.0 | 106875 | 0.0175 | 0.9975 | 0.9727 | 0.9994 | 0.9474 | 0.9737 | 0.9554 |
| 0.0009 | 4.0 | 142500 | 0.0160 | 0.9979 | 0.9778 | 0.9990 | 0.9576 | 0.9788 | 0.9618 |
| 0.0 | 5.0 | 178125 | 0.0192 | 0.9978 | 0.9767 | 0.9994 | 0.955 | 0.9775 | 0.9632 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YoneShiro/SpaceInvadersNoFrameskip-v4
|
YoneShiro
| 2023-06-16T19:14:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:13:20Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 708.00 +/- 250.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YoneShiro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga YoneShiro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga YoneShiro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Boristss/modellunarlander
|
Boristss
| 2023-06-16T19:13:23Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-16T19:12:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.44 +/- 21.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
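As the usage section above is a TODO, a minimal loading sketch with `huggingface_sb3` follows; the archive filename is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hedged sketch: adjust the filename to match the actual archive in the repo.
checkpoint = load_from_hub(repo_id="Boristss/modellunarlander",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```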
|
JvThunder/ppo-Pyramids
|
JvThunder
| 2023-06-16T18:37:51Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-16T18:37:41Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JvThunder/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sambanovasystems/starcoder-toolbench
|
sambanovasystems
| 2023-06-16T18:23:22Z | 23 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"arxiv:2305.16504",
"arxiv:2305.06161",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-24T04:41:14Z |
---
license: bigcode-openrail-m
---
# starcoder-toolbench
<!-- Provide a quick summary of what the model is/does. -->
starcoder-toolbench is a 15-billion-parameter model for API-based action generation. It is instruction-tuned from [starcoder](https://huggingface.co/bigcode/starcoder) on API-based action generation datasets.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** English
- **License:** bigcode-openrail-m
- **Finetuned from model:** [starcoder](https://huggingface.co/bigcode/starcoder)
### Basic Information
<!-- Provide the basic links for the model. -->
- **Paper**: [link](https://arxiv.org/abs/2305.16504)
- **Github**: [link](https://github.com/sambanova/toolbench)
## Uses
<details>
<summary>Click to expand</summary>
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for commercial and research use.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
starcoder-toolbench should NOT be used for purposes other than API-based action generation.
</details>
---
## How to Get Started with the Model
<details>
<summary>Click to expand</summary>
### Loading in model with Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/starcoder-toolbench")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/starcoder-toolbench", device_map="auto", torch_dtype="auto")
```
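A hedged follow-up sketch of how the loaded model might be prompted; the generation settings below are assumptions, not values from the paper:

```python
# Hedged continuation of the loading snippet above; generation settings are assumptions.
prompt = "Task: Looking for homes to rent in Santa Clarita ...\nAction:\n"  # see the example prompts below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```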
### Example Prompts To Try in GPU Tutorial
Prompt 1:
```
I have the following set of API:\n\n# To set the maximum commute time in minute to your office location, assuming the office location is already defined\nAPI.set_max_commute_time(value: int)\n\n# To set the maximum home size in square feet\nAPI.set_max_square_feet(value: int)\n\n# To set the minimum home price in dollars\nAPI.set_min_price(value: int)\n\n# To set the number of garage(s)\nAPI.set_num_garages(value: int)\n\n# To set home types for search. For home buying, home_types choices are: \"House\", \"Townhouse\", \"Condo\", \"Land\", \"Multi-family\", \"Mobile\", \"Co-op\"; for home renting, home_types choices are: \"House\", \"Townhouse\", \"Condo\", \"Apartment\".\nAPI.select_home_type(home_types: List[str])\n\n# To set the number of balconies\nAPI.set_num_balconies(value: int)\n\n# Submit criterion to get search results. This function should be called after setting all the criterion.\nAPI.search()\n\n# To set the floor number\nAPI.set_floor_number(value: int)\n\n# To set the number of bedroom(s)\nAPI.set_num_beds(value: int)\n\n# To set the number of swimming pool(s)\nAPI.set_num_swimming_pools(value: int)\n\n# To set the maximum home price in dollars\nAPI.set_max_price(value: int)\n\n# To specify whether to search homes for buying or renting. 'value' can be chosen from ['buy', 'rent']. This function must be called after setting the location and before setting any other criteria.\nAPI.set_buy_or_rent(value: str)\n\n# To set the number of bathroom(s)\nAPI.set_num_baths(value: float)\n\n# To set the location for the search area. This function must be called before setting any criteria.\nAPI.set_location(value: string)\n\n# To set the minimum home size in square feet\nAPI.set_min_square_feet(value: int)\n\n-------------\n\nTask: Looking for homes to rent in Santa Clarita with a price range between $110000 and $1753000, a minimum of 1700 square feet, at least 2 balconies, and 3.5 bathrooms.\nAction:\n
```
Prompt 2:
```
I have the following set of API:\n\n# To set the location for hotel search, given a Loc object. This function must be called if booking type is 'hotels' or 'both'.\nAPI.set_hotel_location(Loc)\n\n# To set the number of hotel rooms to book.\nAPI.set_num_rooms(value)\n\n# To set the location for departure, given a Loc object. This function must be called if booking type is 'trip tickets' or 'both'.\nAPI.set_origin(Loc)\n\n# To select the transportation type from ['flight', 'train', 'bus', 'cruise']. This function must be called if booking type is 'trip tickets' or 'both'.\nAPI.select_transportation(transportation_type)\n\n# To set the return date of the trip, given a Date object. If booking type is 'both' and this function is not called explicitly, 'return_date' will be set to 'hotel_checkout_date' implicitly.\nAPI.set_return_date(Date)\n\n# To set the hotel check-in date, given a Date object. This function must be called if booking type is 'hotels' or 'both'.\nAPI.set_checkin_date(Date)\n\n# To define a date.\ndate = Date(month, day, year)\n\n# To set the departure date of the trip, given a Date object. This function must be called if booking type is 'trip tickets'. If booking type is 'both' and this function is not called explicitly, 'departure_date' will be set to 'hotel_checkin_date' implicitly.\nAPI.set_departure_date(Date)\n\n# To set the location for arrival, given a Loc object. This function must be called if booking type is 'trip tickets' or 'both'.\nAPI.set_destination(Loc)\n\n# To define a location of a given city 'City'.\nlocation = Loc('City')\n\n# To set maximum hotel room price.\nAPI.set_max_room_price(value)\n\n# To set minimum ticket price.\nAPI.set_min_ticket_price(value)\n\n# To select the booking type from ['hotels', 'trip tickets', 'both']. This function must be called before setting any criteria.\nAPI.select_booking_type(booking_type)\n\n# To set minimum hotel room price.\nAPI.set_min_room_price(value)\n\n# To set the number of child tickets to purchase.\nAPI.set_num_children(value)\n\n# To set the number of adult tickets to purchase.\nAPI.set_num_adults(value)\n\n# To select the hotel room type from ['King Bed', 'Queen Bed', 'Double', 'Luxury'].\nAPI.select_room_type(room_type)\n\n# To set maximum ticket price.\nAPI.set_max_ticket_price(value)\n\n# Submit criterion to get search results. This function should be called after setting all the criterion.\nAPI.search()\n\n# To set the hotel check-out date, given a Date object. This function must be called if booking type is 'hotels' or 'both'.\nAPI.set_checkout_date(Date)\n\n-------------\n\nTask: Looking to book 2 adult and 4 child tickets from Stockton to Baltimore by cruise, on 2023-07-29.\nAction:\n
```
</details>
---
## Training Details
<details>
<summary>Click to expand</summary>
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data is curated for the 8 tasks in ToolBench. See Appendix A of the [paper](https://arxiv.org/abs/2305.16504) for task details and Appendix C.1 for the training data curation details. In total, there are 9704 training samples, organized in all-shot format as described in Appendix C.2. Here is the [download link](https://drive.google.com/file/d/1lUatLGnSVhfy1uVIPEQ7qCoLtnCIXi2O/view?usp=sharing) to the training data.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We trained starcoder-toolbench on 4 80GB A100 GPUs. We started from [starcoder](https://huggingface.co/bigcode/starcoder) and fine-tuned it on the dataset mentioned above.
### Hyperparameters
- Hardware: A100 GPU
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 8
- Global Batch size: 16
- Batch tokens: 16 * 2048 = 32,768 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Fixed LR
- Weight decay: 0.1
</details>
## Acknowledgment
We would like to express our gratitude for the great work done in [StarCoder: may the source be with you!](https://arxiv.org/abs/2305.06161)
## Cite starcoder-toolbench
```
@misc{xu2023tool,
title={On the Tool Manipulation Capability of Open-source Large Language Models},
author={Qiantong Xu and Fenglu Hong and Bo Li and Changran Hu and Zhengyu Chen and Jian Zhang},
year={2023},
eprint={2305.16504},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|