| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (5-139 chars) | string (2-42 chars) | timestamp[us, tz=UTC] (2020-02-15 to 2025-07-27) | int64 (0-223M) | int64 (0-11.7k) | string (533 classes) | list (1-4.05k items) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 to 2025-07-27) | string (11-1.01M chars) |
1aurent/q-Taxi-v3 | 1aurent | 2023-07-10T13:25:59Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-07-10T13:02:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="1aurent/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
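Note that `load_from_hub` and `gym` are not defined in the snippet above. Below is a minimal, self-contained sketch of how they are typically wired together, following the Hugging Face Deep RL course conventions; the `"qtable"` key and the classic `gym` step API are assumptions, not guaranteed by this repo.
```python
import pickle

import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dictionary from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="1aurent/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout: always pick the action with the highest Q-value.
# "qtable" follows the Deep RL course push_to_hub convention; adjust if the pickle differs.
state = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, done, info = env.step(action)
env.close()
```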
|
Neko-Institute-of-Science/guanaco-unchained-33b-qlora | Neko-Institute-of-Science | 2023-07-10T13:24:08Z | 0 | 3 | null | ["dataset:CheshireAI/guanaco-unchained", "region:us"] | null | 2023-07-10T00:10:04Z |
---
datasets:
- CheshireAI/guanaco-unchained
---
Let's see how this goes.
Training in 8-bit and at full context. Is 8-bit even a QLoRA?
```
python qlora.py \
--model_name_or_path /UI/text-generation-webui/models/llama-30b \
--output_dir ./output/guanaco-33b \
--logging_steps 1 \
--save_strategy steps \
--data_seed 42 \
--save_steps 69 \
--save_total_limit 999 \
--per_device_eval_batch_size 1 \
--dataloader_num_workers 3 \
--group_by_length \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--do_eval false \
--do_mmlu_eval false \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--bf16 \
--bits 8 \
--warmup_ratio 0.03 \
--lr_scheduler_type constant \
--gradient_checkpointing \
--gradient_accumulation_steps 32 \
--dataset oasst1 \
--source_max_len 2048 \
--target_max_len 2048 \
--per_device_train_batch_size 1 \
--num_train_epochs 3 \
--learning_rate 0.0001 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.05 \
--weight_decay 0.0 \
--seed 0
```
|
TheBloke/Chronoboros-33B-GGML | TheBloke | 2023-07-10T13:16:31Z | 0 | 11 | null | ["license:other", "region:us"] | null | 2023-07-10T08:29:30Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Henk717's Chronoboros 33B GGML
These files are GGML format model files for [Henk717's Chronoboros 33B](https://huggingface.co/Henk717/chronoboros-33B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/chronoboros-33B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| chronoboros-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| chronoboros-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| chronoboros-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| chronoboros-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| chronoboros-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| chronoboros-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| chronoboros-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| chronoboros-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| chronoboros-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| chronoboros-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| chronoboros-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| chronoboros-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| chronoboros-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| chronoboros-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m chronoboros-33b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
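If you prefer calling the model from Python rather than the CLI, the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings listed above can load these GGML files directly. A minimal sketch, assuming the q4_0 file has been downloaded to the working directory and a llama-cpp-python release from the GGML era (mid-2023) is installed:
```python
from llama_cpp import Llama

# n_ctx matches the 2048-token context used in the CLI example above;
# n_gpu_layers only has an effect with a GPU-enabled build.
# Path assumes the file was downloaded to the current directory.
llm = Llama(model_path="chronoboros-33b.ggmlv3.q4_0.bin", n_ctx=2048, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction: Write a story about llamas\n### Response:"
)
output = llm(prompt, max_tokens=200, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```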
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Henk717's Chronoboros 33B
This model was the result of a 50/50 average weight merge between Airoboros-33B-1.4 and Chronos-33B.
The license is inherited from all of the merged models, which includes the LLaMA license requiring you to own a license to use the LLaMA models.
If you have such a license grant from Facebook you can request access to this model.
|
Knudo/distilbert-base-uncased-finetuned-cola | Knudo | 2023-07-10T13:13:46Z | 61 | 0 | transformers | ["transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-07-10T13:09:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Knudo/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Knudo/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1842
- Validation Loss: 0.5764
- Train Matthews Correlation: 0.5185
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
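For reference, a sketch of how an optimizer matching the configuration above could be built in Keras (an illustrative reconstruction, not the original training script):
```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 1602 steps, as described above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1602,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```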
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5106 | 0.4629 | 0.4797 | 0 |
| 0.3111 | 0.4999 | 0.4957 | 1 |
| 0.1842 | 0.5764 | 0.5185 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-tiny_tobacco3482_kd_MSE_test_pretrain_student | jordyvl | 2023-07-10T13:01:47Z | 163 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-07-10T12:59:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vit-tiny_tobacco3482_kd_MSE_test_pretrain_student
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_tobacco3482_kd_MSE_test_pretrain_student
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
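A sketch of `TrainingArguments` mirroring the hyperparameters listed above (illustrative only; the actual training script is not part of this card, and the output directory is an assumption). The default AdamW optimizer in 🤗 Transformers already uses betas=(0.9, 0.999) and epsilon=1e-08:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-tiny_tobacco3482_kd_MSE_test_pretrain_student",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```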
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.6243 | 0.595 | 0.6456 | 1.9017 | 0.595 | 0.5113 | 0.3512 | 0.2202 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
datenmassiv/falcon-7b-instruct | datenmassiv | 2023-07-10T13:00:34Z | 13 | 0 | transformers | ["transformers", "pytorch", "coreml", "RefinedWebModel", "text-generation", "custom_code", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2205.14135", "arxiv:1911.02150", "arxiv:2005.14165", "arxiv:2104.09864", "arxiv:2306.01116", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-10T13:00:33Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
widget:
- text: Hey Falcon! Any recommendations for my holidays in Abu Dhabi?
example_title: Abu Dhabi Trip
- text: What's the Everett interpretation of quantum mechanics?
example_title: 'Q/A: Quantum & Answers'
- text: >-
Give me a list of the top 10 dive sites you would recommend around the
world.
example_title: Diving Top 10
- text: Can you tell me more about deep-water soloing?
example_title: Extreme sports
- text: >-
Can you write a short tweet about the Apache 2.0 release of our latest AI
model, Falcon LLM?
example_title: Twitter Helper
- text: What are the responsibilities of a Chief Llama Officer?
example_title: Trendy Jobs
license: apache-2.0
duplicated_from: tiiuae/falcon-7b-instruct
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected]
|
ccattomio/Reinforce-CartPole-v1 | ccattomio | 2023-07-10T12:59:48Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-07-10T12:59:37Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
1aurent/q-FrozenLake-v1-4x4-noSlippery | 1aurent | 2023-07-10T12:58:37Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-07-10T12:58:33Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="1aurent/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bonzo1971/sesgo_genero_model | bonzo1971 | 2023-07-10T12:46:57Z | 105 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-07-09T14:10:18Z |
---
tags:
- generated_from_trainer
model-index:
- name: sesgo_genero_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sesgo_genero_model
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GraydientPlatformAPI/model_710du | GraydientPlatformAPI | 2023-07-10T12:30:48Z | 29 | 0 | diffusers | ["diffusers", "text-to-image", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-07-10T12:10:24Z |
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
Nianhua123/ppo-LunarLander-v2 | Nianhua123 | 2023-07-10T12:24:32Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-10T12:24:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.69 +/- 15.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
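Filling in the stub above, a minimal sketch using `huggingface_sb3` and `stable-baselines3` (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="Nianhua123/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy on a fresh LunarLander-v2 environment.
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```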
|
PraveenJesu/openai-whisper-medium-peft-lora-v2.2.4 | PraveenJesu | 2023-07-10T12:24:27Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-07-10T12:24:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
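For reference, the flags above correspond roughly to the following `transformers.BitsAndBytesConfig` (a sketch, not the original training code):
```python
import torch
from transformers import BitsAndBytesConfig

# Assumed reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```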
### Framework versions
- PEFT 0.4.0.dev0
|
KaraAgroAI/CADI-AI | KaraAgroAI | 2023-07-10T12:23:56Z | 21 | 3 | yolo | ["yolo", "object detection", "vision", "object-detection", "en", "dataset:KaraAgroAI/CADI-AI", "license:agpl-3.0", "region:us"] | object-detection | 2023-05-17T13:54:39Z |
---
license: agpl-3.0
datasets:
- KaraAgroAI/CADI-AI
language:
- en
library_name: yolo
tags:
- object detection
- vision
- yolo
pipeline_tag: object-detection
metrics:
- mape
---
## Cashew Disease Identification with AI (CADI-AI) Model
### Model Description
Object detection model trained using [YOLO v5x](https://github.com/ultralytics/yolov5/releases), a SOTA object detection algorithm.
The model was pre-trained on the Cashew Disease Identification with AI (CADI-AI) train set (3788 images) at a resolution of 640x640 pixels.
The CADI-AI dataset is available via [Kaggle](https://www.kaggle.com/datasets/karaagroaiprojects/cadi-ai) and
[HuggingFace](https://huggingface.co/datasets/KaraAgroAI/CADI-AI).
## Intended uses
You can use the raw model for object detection on cashew images.
The model was initially developed to inform users whether cashew trees suffer from:
- pest infection, i.e. damage to crops by insects or pests
- disease, i.e. attacks on crops by microorganisms
- abiotic stress caused by non-living factors (e.g. environmental factors like weather or soil conditions or the lack of mineral nutrients to the crop).
KaraAgro AI developed the model for the initiatives
[Market-Oriented Value Chains for Jobs & Growth in the ECOWAS Region (MOVE)](https://www.giz.de/en/worldwide/108524.html) and
[FAIR Forward - Artificial Intelligence for All](https://www.bmz-digital.global/en/overview-of-initiatives/fair-forward/).
Both initiatives are implemented by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) on behalf of the German Federal Ministry for Economic Cooperation and Development (BMZ).
### How to use
- Load model and perform prediction:
```bash
pip install -U ultralytics
```
```python
import torch
# load model
model = torch.hub.load('ultralytics/yolov5', 'custom', path='CADI-AI/yolov5_0.65map_exp7_best.pt', force_reload=True)
# Images
img = ['/path/to/your/image.jpg']# batch of images
# set model parameters
# set Non-Maximum-Suppression(NMS) threshold to define
# minimum confidence score that a bounding box must have in order to be kept.
model.conf = 0.20 # NMS confidence threshold
# perform inference
results = model(img, size=640)
# Results
results.print()
results.xyxy[0] # img1 predictions (tensor)
results.pandas().xyxy[0] # img1 predictions (pandas)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --data data.yaml --img 640 --batch 16 --weights KaraAgroAI/CADI-AI --epochs 10
```
### Model performance
| Class | Precision | Recall | mAP@50 | mAP@50-95 |
| --- | --- | --- | --- | --- |
| all | 0.663 | 0.632 | 0.648 | 0.291 |
| insect | 0.794 | 0.811 | 0.815 | 0.39 |
| abiotic | 0.682 | 0.514 | 0.542 | 0.237 |
| disease | 0.594 | 0.571 | 0.588 | 0.248 |
### Limitations of the Model
The model has a few limitations that affect its performance in distinguishing between the disease class and the abiotic class.
The primary challenge lies in the similarity between these two classes within a typical farm setting.
The model may encounter difficulties in accurately differentiating between them due to their overlapping characteristics.
This limitation is an inherent challenge in the dataset and can impact the model's accuracy when classifying these classes.
However, it is worth noting that the model exhibits strong performance when it comes to the insect class.
This is attributed to the distinct characteristics of the insect class, which make it easier to identify and classify accurately.
### Demo
[CADI-AI Spaces demonstration](https://huggingface.co/spaces/KaraAgroAI/CADI-AI)
### Project Repo
If you want to know how the model and dataset have been used further for the GIZ-funded activity, please have a look at:
- The [GitHub repository](https://github.com/karaagro/cadi-ai) for the CADI AI desktop application
### Example prediction
<div align="center">
<img width="640" alt="KaraAgroAI/CADI-AI" src="https://huggingface.co/KaraAgroAI/CADI-AI/resolve/main/sample.jpg">
</div>
|
Mavila/First_DRL | Mavila | 2023-07-10T12:06:41Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-07-10T12:06:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.12 +/- 13.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jordyvl/dit-small_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix | jordyvl | 2023-07-10T12:04:08Z | 161 | 0 | transformers | ["transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-07-10T11:10:29Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8796
- Accuracy: 0.26
- Brier Loss: 0.8768
- Nll: 6.0962
- F1 Micro: 0.26
- F1 Macro: 0.2480
- Ece: 0.2002
- Aurc: 0.5815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.5365 | 0.065 | 0.9398 | 10.2864 | 0.065 | 0.0116 | 0.1183 | 0.9536 |
| No log | 2.0 | 14 | 1.5332 | 0.06 | 0.9374 | 9.8468 | 0.06 | 0.0269 | 0.1067 | 0.9096 |
| No log | 3.0 | 21 | 1.5119 | 0.085 | 0.9352 | 9.1495 | 0.085 | 0.0355 | 0.1135 | 0.8759 |
| No log | 4.0 | 28 | 1.5040 | 0.0825 | 0.9333 | 8.6549 | 0.0825 | 0.0439 | 0.1181 | 0.8618 |
| No log | 5.0 | 35 | 1.5021 | 0.1 | 0.9301 | 8.9643 | 0.1000 | 0.0558 | 0.1318 | 0.8030 |
| No log | 6.0 | 42 | 1.4885 | 0.1 | 0.9276 | 7.8684 | 0.1000 | 0.0505 | 0.1205 | 0.8190 |
| No log | 7.0 | 49 | 1.4882 | 0.0975 | 0.9254 | 9.4095 | 0.0975 | 0.0584 | 0.1220 | 0.7847 |
| No log | 8.0 | 56 | 1.4909 | 0.1275 | 0.9227 | 9.4274 | 0.1275 | 0.0827 | 0.1335 | 0.7445 |
| No log | 9.0 | 63 | 1.4837 | 0.115 | 0.9217 | 10.2918 | 0.115 | 0.0546 | 0.1366 | 0.7932 |
| No log | 10.0 | 70 | 1.4857 | 0.1125 | 0.9186 | 9.5039 | 0.1125 | 0.0510 | 0.1277 | 0.7749 |
| No log | 11.0 | 77 | 1.4804 | 0.1125 | 0.9183 | 8.5178 | 0.1125 | 0.0515 | 0.1315 | 0.7831 |
| No log | 12.0 | 84 | 1.4701 | 0.11 | 0.9177 | 8.2398 | 0.11 | 0.0655 | 0.1310 | 0.7754 |
| No log | 13.0 | 91 | 1.4721 | 0.16 | 0.9160 | 7.2379 | 0.16 | 0.1155 | 0.1462 | 0.7370 |
| No log | 14.0 | 98 | 1.4717 | 0.11 | 0.9159 | 8.1355 | 0.11 | 0.0633 | 0.1221 | 0.7579 |
| No log | 15.0 | 105 | 1.4739 | 0.1325 | 0.9138 | 7.4037 | 0.1325 | 0.0790 | 0.1419 | 0.7358 |
| No log | 16.0 | 112 | 1.4657 | 0.1425 | 0.9135 | 7.8063 | 0.1425 | 0.0821 | 0.1285 | 0.7269 |
| No log | 17.0 | 119 | 1.4632 | 0.1375 | 0.9112 | 7.8852 | 0.1375 | 0.0948 | 0.1389 | 0.7342 |
| No log | 18.0 | 126 | 1.4769 | 0.15 | 0.9081 | 8.5375 | 0.15 | 0.0894 | 0.1399 | 0.7113 |
| No log | 19.0 | 133 | 1.4547 | 0.1775 | 0.9045 | 6.4114 | 0.1775 | 0.1174 | 0.1507 | 0.7007 |
| No log | 20.0 | 140 | 1.4470 | 0.1725 | 0.9031 | 8.1696 | 0.1725 | 0.1246 | 0.1464 | 0.7079 |
| No log | 21.0 | 147 | 1.4615 | 0.19 | 0.9021 | 6.0696 | 0.19 | 0.1390 | 0.1646 | 0.7023 |
| No log | 22.0 | 154 | 1.4588 | 0.2 | 0.8996 | 6.0038 | 0.2000 | 0.1384 | 0.1628 | 0.6821 |
| No log | 23.0 | 161 | 1.4646 | 0.1525 | 0.8988 | 7.0678 | 0.1525 | 0.1075 | 0.1458 | 0.7000 |
| No log | 24.0 | 168 | 1.4491 | 0.2125 | 0.8933 | 5.9276 | 0.2125 | 0.1503 | 0.1533 | 0.6457 |
| No log | 25.0 | 175 | 1.4526 | 0.205 | 0.8916 | 7.6108 | 0.205 | 0.1479 | 0.1603 | 0.6676 |
| No log | 26.0 | 182 | 1.4510 | 0.17 | 0.8910 | 5.6337 | 0.17 | 0.1333 | 0.1396 | 0.6868 |
| No log | 27.0 | 189 | 1.4567 | 0.19 | 0.8850 | 5.2038 | 0.19 | 0.1380 | 0.1637 | 0.6547 |
| No log | 28.0 | 196 | 1.4570 | 0.2225 | 0.8846 | 6.5368 | 0.2225 | 0.1840 | 0.1701 | 0.6554 |
| No log | 29.0 | 203 | 1.4701 | 0.2075 | 0.8820 | 5.0057 | 0.2075 | 0.1663 | 0.1719 | 0.6598 |
| No log | 30.0 | 210 | 1.4693 | 0.2225 | 0.8755 | 7.4456 | 0.2225 | 0.1729 | 0.1626 | 0.6355 |
| No log | 31.0 | 217 | 1.4670 | 0.23 | 0.8787 | 5.8938 | 0.23 | 0.1904 | 0.1717 | 0.6424 |
| No log | 32.0 | 224 | 1.4540 | 0.2275 | 0.8756 | 6.6513 | 0.2275 | 0.1673 | 0.1676 | 0.6306 |
| No log | 33.0 | 231 | 1.4641 | 0.2275 | 0.8649 | 5.5689 | 0.2275 | 0.1751 | 0.1746 | 0.6138 |
| No log | 34.0 | 238 | 1.4710 | 0.2425 | 0.8640 | 7.0556 | 0.2425 | 0.1957 | 0.1809 | 0.6048 |
| No log | 35.0 | 245 | 1.4685 | 0.23 | 0.8632 | 5.5735 | 0.23 | 0.1940 | 0.1609 | 0.6188 |
| No log | 36.0 | 252 | 1.4665 | 0.2375 | 0.8592 | 5.8835 | 0.2375 | 0.1952 | 0.1727 | 0.6050 |
| No log | 37.0 | 259 | 1.4668 | 0.235 | 0.8540 | 5.3502 | 0.235 | 0.1966 | 0.1746 | 0.6056 |
| No log | 38.0 | 266 | 1.4855 | 0.27 | 0.8510 | 5.3781 | 0.27 | 0.2124 | 0.1692 | 0.5825 |
| No log | 39.0 | 273 | 1.5279 | 0.265 | 0.8562 | 6.2426 | 0.265 | 0.2126 | 0.1772 | 0.5831 |
| No log | 40.0 | 280 | 1.5433 | 0.2425 | 0.8551 | 5.9574 | 0.2425 | 0.1867 | 0.1499 | 0.5874 |
| No log | 41.0 | 287 | 1.5955 | 0.2525 | 0.8597 | 6.1628 | 0.2525 | 0.2024 | 0.1479 | 0.5891 |
| No log | 42.0 | 294 | 1.5528 | 0.2475 | 0.8541 | 6.3624 | 0.2475 | 0.1908 | 0.1566 | 0.5735 |
| No log | 43.0 | 301 | 1.5858 | 0.2675 | 0.8504 | 6.1261 | 0.2675 | 0.2174 | 0.1706 | 0.5674 |
| No log | 44.0 | 308 | 1.6013 | 0.2725 | 0.8496 | 5.8409 | 0.2725 | 0.2463 | 0.1846 | 0.5807 |
| No log | 45.0 | 315 | 1.5632 | 0.2625 | 0.8472 | 5.9669 | 0.2625 | 0.2307 | 0.1689 | 0.5689 |
| No log | 46.0 | 322 | 1.6520 | 0.2675 | 0.8509 | 5.8544 | 0.2675 | 0.2325 | 0.1779 | 0.5622 |
| No log | 47.0 | 329 | 1.6135 | 0.2625 | 0.8476 | 5.5208 | 0.2625 | 0.2504 | 0.1565 | 0.5759 |
| No log | 48.0 | 336 | 1.6565 | 0.275 | 0.8466 | 5.9254 | 0.275 | 0.2527 | 0.2026 | 0.5616 |
| No log | 49.0 | 343 | 1.6807 | 0.2625 | 0.8531 | 6.1297 | 0.2625 | 0.2259 | 0.1813 | 0.5664 |
| No log | 50.0 | 350 | 1.7266 | 0.255 | 0.8560 | 6.0828 | 0.255 | 0.2315 | 0.1817 | 0.5735 |
| No log | 51.0 | 357 | 1.7038 | 0.2525 | 0.8579 | 5.6442 | 0.2525 | 0.2405 | 0.1861 | 0.5828 |
| No log | 52.0 | 364 | 1.7954 | 0.255 | 0.8583 | 5.7016 | 0.255 | 0.2227 | 0.1722 | 0.5725 |
| No log | 53.0 | 371 | 1.7567 | 0.275 | 0.8557 | 6.1586 | 0.275 | 0.2523 | 0.1577 | 0.5619 |
| No log | 54.0 | 378 | 1.7589 | 0.2525 | 0.8565 | 5.3969 | 0.2525 | 0.2325 | 0.1840 | 0.5661 |
| No log | 55.0 | 385 | 1.7778 | 0.265 | 0.8569 | 5.8559 | 0.265 | 0.2447 | 0.1835 | 0.5640 |
| No log | 56.0 | 392 | 1.8044 | 0.275 | 0.8592 | 5.9942 | 0.275 | 0.2517 | 0.1783 | 0.5627 |
| No log | 57.0 | 399 | 1.8327 | 0.2625 | 0.8628 | 6.0224 | 0.2625 | 0.2333 | 0.1801 | 0.5560 |
| No log | 58.0 | 406 | 1.8184 | 0.25 | 0.8609 | 6.0769 | 0.25 | 0.2333 | 0.1941 | 0.5718 |
| No log | 59.0 | 413 | 1.8318 | 0.2575 | 0.8639 | 5.9454 | 0.2575 | 0.2364 | 0.1965 | 0.5743 |
| No log | 60.0 | 420 | 1.8081 | 0.2525 | 0.8641 | 6.0119 | 0.2525 | 0.2380 | 0.1818 | 0.5755 |
| No log | 61.0 | 427 | 1.8405 | 0.2625 | 0.8775 | 6.2129 | 0.2625 | 0.2474 | 0.1767 | 0.5908 |
| No log | 62.0 | 434 | 1.9012 | 0.2625 | 0.8728 | 6.1015 | 0.2625 | 0.2373 | 0.1881 | 0.5716 |
| No log | 63.0 | 441 | 1.8500 | 0.26 | 0.8728 | 6.3885 | 0.26 | 0.2414 | 0.1933 | 0.5809 |
| No log | 64.0 | 448 | 1.8771 | 0.2675 | 0.8733 | 6.2730 | 0.2675 | 0.2553 | 0.2035 | 0.5800 |
| No log | 65.0 | 455 | 1.8744 | 0.2575 | 0.8677 | 5.9805 | 0.2575 | 0.2392 | 0.1918 | 0.5663 |
| No log | 66.0 | 462 | 1.8366 | 0.255 | 0.8694 | 6.0073 | 0.255 | 0.2403 | 0.2048 | 0.5807 |
| No log | 67.0 | 469 | 1.8758 | 0.2575 | 0.8743 | 6.1015 | 0.2575 | 0.2381 | 0.2071 | 0.5825 |
| No log | 68.0 | 476 | 1.8796 | 0.2675 | 0.8711 | 5.9457 | 0.2675 | 0.2470 | 0.2100 | 0.5737 |
| No log | 69.0 | 483 | 1.8635 | 0.2675 | 0.8721 | 5.9312 | 0.2675 | 0.2493 | 0.1788 | 0.5751 |
| No log | 70.0 | 490 | 1.8801 | 0.2625 | 0.8710 | 5.9629 | 0.2625 | 0.2467 | 0.1974 | 0.5721 |
| No log | 71.0 | 497 | 1.8936 | 0.26 | 0.8791 | 6.0358 | 0.26 | 0.2481 | 0.1922 | 0.5844 |
| 0.9216 | 72.0 | 504 | 1.8736 | 0.275 | 0.8715 | 6.0493 | 0.275 | 0.2569 | 0.2099 | 0.5710 |
| 0.9216 | 73.0 | 511 | 1.8784 | 0.2525 | 0.8760 | 6.1441 | 0.2525 | 0.2401 | 0.1978 | 0.5849 |
| 0.9216 | 74.0 | 518 | 1.8843 | 0.2725 | 0.8763 | 6.1948 | 0.2725 | 0.2533 | 0.2007 | 0.5801 |
| 0.9216 | 75.0 | 525 | 1.8785 | 0.2675 | 0.8784 | 5.9868 | 0.2675 | 0.2578 | 0.1975 | 0.5851 |
| 0.9216 | 76.0 | 532 | 1.8812 | 0.275 | 0.8725 | 5.9367 | 0.275 | 0.2594 | 0.2037 | 0.5744 |
| 0.9216 | 77.0 | 539 | 1.8956 | 0.27 | 0.8746 | 5.9038 | 0.27 | 0.2541 | 0.1816 | 0.5738 |
| 0.9216 | 78.0 | 546 | 1.8897 | 0.265 | 0.8802 | 5.9763 | 0.265 | 0.2493 | 0.2098 | 0.5866 |
| 0.9216 | 79.0 | 553 | 1.8728 | 0.275 | 0.8752 | 6.0806 | 0.275 | 0.2623 | 0.1874 | 0.5794 |
| 0.9216 | 80.0 | 560 | 1.8887 | 0.2725 | 0.8759 | 6.2762 | 0.2725 | 0.2520 | 0.2005 | 0.5768 |
| 0.9216 | 81.0 | 567 | 1.8987 | 0.2725 | 0.8787 | 6.2444 | 0.2725 | 0.2587 | 0.2183 | 0.5773 |
| 0.9216 | 82.0 | 574 | 1.8759 | 0.2625 | 0.8773 | 6.1643 | 0.2625 | 0.2541 | 0.1922 | 0.5805 |
| 0.9216 | 83.0 | 581 | 1.8766 | 0.27 | 0.8748 | 6.0036 | 0.27 | 0.2554 | 0.1784 | 0.5762 |
| 0.9216 | 84.0 | 588 | 1.8809 | 0.2625 | 0.8764 | 6.0488 | 0.2625 | 0.2469 | 0.2030 | 0.5833 |
| 0.9216 | 85.0 | 595 | 1.8982 | 0.26 | 0.8775 | 6.0747 | 0.26 | 0.2453 | 0.1998 | 0.5851 |
| 0.9216 | 86.0 | 602 | 1.8912 | 0.27 | 0.8798 | 6.1894 | 0.27 | 0.2566 | 0.1938 | 0.5839 |
| 0.9216 | 87.0 | 609 | 1.8847 | 0.2775 | 0.8769 | 6.2744 | 0.2775 | 0.2643 | 0.2019 | 0.5775 |
| 0.9216 | 88.0 | 616 | 1.8734 | 0.265 | 0.8741 | 6.1928 | 0.265 | 0.2526 | 0.1763 | 0.5820 |
| 0.9216 | 89.0 | 623 | 1.8760 | 0.2725 | 0.8768 | 6.0274 | 0.2725 | 0.2620 | 0.2039 | 0.5792 |
| 0.9216 | 90.0 | 630 | 1.8860 | 0.265 | 0.8771 | 6.0912 | 0.265 | 0.2518 | 0.1924 | 0.5810 |
| 0.9216 | 91.0 | 637 | 1.8865 | 0.2625 | 0.8750 | 6.2350 | 0.2625 | 0.2476 | 0.1844 | 0.5791 |
| 0.9216 | 92.0 | 644 | 1.8815 | 0.2725 | 0.8733 | 6.0962 | 0.2725 | 0.2563 | 0.2013 | 0.5721 |
| 0.9216 | 93.0 | 651 | 1.8794 | 0.27 | 0.8756 | 6.2535 | 0.27 | 0.2562 | 0.2028 | 0.5764 |
| 0.9216 | 94.0 | 658 | 1.8835 | 0.2675 | 0.8769 | 6.2039 | 0.2675 | 0.2562 | 0.1928 | 0.5773 |
| 0.9216 | 95.0 | 665 | 1.8904 | 0.27 | 0.8786 | 6.1504 | 0.27 | 0.2543 | 0.2034 | 0.5768 |
| 0.9216 | 96.0 | 672 | 1.8911 | 0.26 | 0.8788 | 6.1527 | 0.26 | 0.2465 | 0.2025 | 0.5829 |
| 0.9216 | 97.0 | 679 | 1.8871 | 0.265 | 0.8776 | 6.0994 | 0.265 | 0.2519 | 0.2126 | 0.5794 |
| 0.9216 | 98.0 | 686 | 1.8825 | 0.265 | 0.8769 | 6.1564 | 0.265 | 0.2516 | 0.1987 | 0.5776 |
| 0.9216 | 99.0 | 693 | 1.8803 | 0.2675 | 0.8766 | 6.1183 | 0.2675 | 0.2561 | 0.2095 | 0.5798 |
| 0.9216 | 100.0 | 700 | 1.8796 | 0.26 | 0.8768 | 6.0962 | 0.26 | 0.2480 | 0.2002 | 0.5815 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HeinrichWirth/taxi | HeinrichWirth | 2023-07-10T11:44:14Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-07-10T11:43:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="HeinrichWirth/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tyavika/LR1E4-BS16-Distil-CNN512LSTM256NoBi | tyavika | 2023-07-10T11:29:13Z | 76 | 0 | transformers | ["transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us"] | question-answering | 2023-07-09T20:24:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [tyavika/LR1E4-BS16-Distil-CNN512LSTM256NoBi](https://huggingface.co/tyavika/LR1E4-BS16-Distil-CNN512LSTM256NoBi) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Junr-syl/tweet_sentiments_analysis | Junr-syl | 2023-07-10T11:21:39Z | 162 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-07-07T09:19:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tweet_sentiments_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiments_analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3953
- eval_accuracy: 0.8660
- eval_runtime: 254.1512
- eval_samples_per_second: 31.473
- eval_steps_per_second: 3.935
- step: 0
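A minimal sketch of running inference with this checkpoint via the 🤗 `pipeline` API (the example text is arbitrary, and the label names depend on the `id2label` mapping saved with the model):
```python
from transformers import pipeline

# Labels default to LABEL_0/LABEL_1/... unless an id2label mapping was saved with the model.
classifier = pipeline("text-classification", model="Junr-syl/tweet_sentiments_analysis")
print(classifier("I love this new phone, the battery lasts forever!"))
```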
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-base_tobacco_small_student | jordyvl | 2023-07-10T10:58:25Z | 161 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "beit", "image-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-07-10T10:07:51Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base_tobacco_small_student
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco_small_student
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3305
- Accuracy: 0.435
- Brier Loss: 1.0472
- Nll: 10.3327
- F1 Micro: 0.435
- F1 Macro: 0.4299
- Ece: 0.5115
- Aurc: 0.4245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 50 | 2.1780 | 0.16 | 0.8745 | 11.2696 | 0.16 | 0.0323 | 0.2326 | 0.8208 |
| No log | 2.0 | 100 | 2.1761 | 0.19 | 0.8727 | 10.5065 | 0.19 | 0.0548 | 0.2712 | 0.7980 |
| No log | 3.0 | 150 | 2.1426 | 0.16 | 0.8689 | 8.8915 | 0.16 | 0.0451 | 0.2697 | 0.6322 |
| No log | 4.0 | 200 | 2.0668 | 0.225 | 0.8434 | 9.6036 | 0.225 | 0.1219 | 0.2680 | 0.6623 |
| No log | 5.0 | 250 | 2.0633 | 0.21 | 0.8447 | 5.7679 | 0.2100 | 0.1401 | 0.2733 | 0.5765 |
| No log | 6.0 | 300 | 2.0030 | 0.22 | 0.8351 | 7.1501 | 0.22 | 0.1132 | 0.3000 | 0.6750 |
| No log | 7.0 | 350 | 1.9273 | 0.32 | 0.8243 | 6.2911 | 0.32 | 0.2612 | 0.2822 | 0.6549 |
| No log | 8.0 | 400 | 1.7954 | 0.365 | 0.7742 | 4.2641 | 0.3650 | 0.2647 | 0.2630 | 0.5031 |
| No log | 9.0 | 450 | 1.8070 | 0.36 | 0.7720 | 4.9274 | 0.36 | 0.2914 | 0.2601 | 0.4871 |
| 1.9795 | 10.0 | 500 | 1.7838 | 0.34 | 0.7857 | 3.3860 | 0.34 | 0.2387 | 0.2902 | 0.5057 |
| 1.9795 | 11.0 | 550 | 1.7214 | 0.395 | 0.7404 | 4.1630 | 0.395 | 0.2995 | 0.2922 | 0.4210 |
| 1.9795 | 12.0 | 600 | 1.6834 | 0.445 | 0.7284 | 3.7081 | 0.445 | 0.3444 | 0.2700 | 0.3914 |
| 1.9795 | 13.0 | 650 | 1.6992 | 0.38 | 0.7641 | 4.1246 | 0.38 | 0.3045 | 0.3375 | 0.4155 |
| 1.9795 | 14.0 | 700 | 1.8695 | 0.395 | 0.7711 | 5.6899 | 0.395 | 0.3432 | 0.3224 | 0.4425 |
| 1.9795 | 15.0 | 750 | 1.8757 | 0.38 | 0.7939 | 5.1099 | 0.38 | 0.3879 | 0.3102 | 0.4313 |
| 1.9795 | 16.0 | 800 | 2.0457 | 0.405 | 0.8184 | 5.6034 | 0.405 | 0.3957 | 0.3256 | 0.4414 |
| 1.9795 | 17.0 | 850 | 2.2243 | 0.395 | 0.8653 | 7.7124 | 0.395 | 0.3567 | 0.3887 | 0.3997 |
| 1.9795 | 18.0 | 900 | 1.9309 | 0.45 | 0.7794 | 5.2698 | 0.45 | 0.3763 | 0.3626 | 0.3767 |
| 1.9795 | 19.0 | 950 | 2.2285 | 0.415 | 0.8319 | 6.7127 | 0.415 | 0.4153 | 0.3667 | 0.3942 |
| 0.6717 | 20.0 | 1000 | 2.3745 | 0.445 | 0.8643 | 7.4432 | 0.445 | 0.4290 | 0.3859 | 0.4046 |
| 0.6717 | 21.0 | 1050 | 2.5389 | 0.41 | 0.9148 | 7.6865 | 0.41 | 0.3994 | 0.4351 | 0.4054 |
| 0.6717 | 22.0 | 1100 | 2.5537 | 0.465 | 0.8500 | 8.1266 | 0.465 | 0.4623 | 0.4070 | 0.3900 |
| 0.6717 | 23.0 | 1150 | 2.8355 | 0.42 | 0.9426 | 8.8542 | 0.4200 | 0.3930 | 0.4508 | 0.4201 |
| 0.6717 | 24.0 | 1200 | 2.8575 | 0.4 | 0.9962 | 7.6428 | 0.4000 | 0.3502 | 0.4994 | 0.4119 |
| 0.6717 | 25.0 | 1250 | 2.8704 | 0.445 | 0.9418 | 9.2600 | 0.445 | 0.4570 | 0.4309 | 0.4021 |
| 0.6717 | 26.0 | 1300 | 3.4702 | 0.43 | 0.9641 | 12.1621 | 0.4300 | 0.3977 | 0.4590 | 0.3597 |
| 0.6717 | 27.0 | 1350 | 3.1484 | 0.475 | 0.9518 | 8.1474 | 0.4750 | 0.4641 | 0.4809 | 0.4088 |
| 0.6717 | 28.0 | 1400 | 3.2299 | 0.455 | 0.9673 | 9.6161 | 0.455 | 0.4205 | 0.4711 | 0.3806 |
| 0.6717 | 29.0 | 1450 | 3.4968 | 0.425 | 1.0136 | 10.5614 | 0.425 | 0.3992 | 0.4743 | 0.3773 |
| 0.0268 | 30.0 | 1500 | 3.1340 | 0.46 | 0.9443 | 8.5023 | 0.46 | 0.4296 | 0.4557 | 0.3735 |
| 0.0268 | 31.0 | 1550 | 3.4297 | 0.435 | 1.0058 | 8.2428 | 0.435 | 0.3979 | 0.4967 | 0.3848 |
| 0.0268 | 32.0 | 1600 | 3.6922 | 0.4 | 1.0488 | 10.8019 | 0.4000 | 0.3880 | 0.5223 | 0.4017 |
| 0.0268 | 33.0 | 1650 | 3.6009 | 0.445 | 0.9964 | 10.1007 | 0.445 | 0.4204 | 0.4924 | 0.3981 |
| 0.0268 | 34.0 | 1700 | 3.6678 | 0.425 | 1.0494 | 9.1369 | 0.425 | 0.3896 | 0.5159 | 0.4192 |
| 0.0268 | 35.0 | 1750 | 3.5743 | 0.45 | 0.9953 | 9.5996 | 0.45 | 0.4182 | 0.4934 | 0.4030 |
| 0.0268 | 36.0 | 1800 | 3.5551 | 0.465 | 0.9877 | 9.6080 | 0.465 | 0.4221 | 0.5033 | 0.3977 |
| 0.0268 | 37.0 | 1850 | 3.7424 | 0.435 | 1.0191 | 9.9258 | 0.435 | 0.4292 | 0.4955 | 0.4120 |
| 0.0268 | 38.0 | 1900 | 3.7093 | 0.45 | 1.0051 | 9.7038 | 0.45 | 0.4033 | 0.4966 | 0.3857 |
| 0.0268 | 39.0 | 1950 | 3.7240 | 0.45 | 1.0076 | 9.8462 | 0.45 | 0.4027 | 0.4953 | 0.3962 |
| 0.0022 | 40.0 | 2000 | 3.7503 | 0.455 | 1.0090 | 9.9967 | 0.455 | 0.4076 | 0.5056 | 0.3968 |
| 0.0022 | 41.0 | 2050 | 3.5545 | 0.44 | 1.0007 | 8.7616 | 0.44 | 0.4285 | 0.4894 | 0.4008 |
| 0.0022 | 42.0 | 2100 | 3.7452 | 0.435 | 1.0142 | 9.4376 | 0.435 | 0.4135 | 0.5032 | 0.4022 |
| 0.0022 | 43.0 | 2150 | 3.5980 | 0.47 | 0.9532 | 8.2333 | 0.47 | 0.4441 | 0.4650 | 0.4113 |
| 0.0022 | 44.0 | 2200 | 3.7055 | 0.45 | 0.9946 | 9.0121 | 0.45 | 0.4327 | 0.4817 | 0.3688 |
| 0.0022 | 45.0 | 2250 | 3.8500 | 0.435 | 1.0161 | 9.2035 | 0.435 | 0.4164 | 0.5128 | 0.3723 |
| 0.0022 | 46.0 | 2300 | 3.8806 | 0.435 | 1.0261 | 10.7033 | 0.435 | 0.4323 | 0.5008 | 0.3812 |
| 0.0022 | 47.0 | 2350 | 3.8114 | 0.455 | 1.0128 | 9.6784 | 0.455 | 0.4236 | 0.5025 | 0.3873 |
| 0.0022 | 48.0 | 2400 | 3.8743 | 0.435 | 1.0294 | 8.7193 | 0.435 | 0.3734 | 0.5109 | 0.3783 |
| 0.0022 | 49.0 | 2450 | 3.9281 | 0.43 | 1.0414 | 9.9489 | 0.4300 | 0.4296 | 0.5047 | 0.4049 |
| 0.0047 | 50.0 | 2500 | 3.7824 | 0.45 | 0.9956 | 10.7814 | 0.45 | 0.4467 | 0.4975 | 0.3949 |
| 0.0047 | 51.0 | 2550 | 4.0089 | 0.475 | 0.9668 | 11.9005 | 0.4750 | 0.4253 | 0.4637 | 0.4501 |
| 0.0047 | 52.0 | 2600 | 3.7248 | 0.43 | 0.9909 | 10.6449 | 0.4300 | 0.4064 | 0.4750 | 0.4023 |
| 0.0047 | 53.0 | 2650 | 3.7911 | 0.415 | 1.0491 | 9.1188 | 0.415 | 0.3608 | 0.5130 | 0.4173 |
| 0.0047 | 54.0 | 2700 | 3.6925 | 0.44 | 1.0000 | 8.9655 | 0.44 | 0.3970 | 0.4826 | 0.4168 |
| 0.0047 | 55.0 | 2750 | 3.6214 | 0.46 | 0.9590 | 9.5422 | 0.46 | 0.4440 | 0.4636 | 0.3829 |
| 0.0047 | 56.0 | 2800 | 4.3545 | 0.405 | 1.0811 | 10.6531 | 0.405 | 0.4090 | 0.5439 | 0.4533 |
| 0.0047 | 57.0 | 2850 | 3.6835 | 0.46 | 0.9717 | 8.2408 | 0.46 | 0.4367 | 0.4950 | 0.4118 |
| 0.0047 | 58.0 | 2900 | 4.0080 | 0.465 | 1.0011 | 9.3764 | 0.465 | 0.4579 | 0.4927 | 0.4234 |
| 0.0047 | 59.0 | 2950 | 4.0141 | 0.45 | 1.0014 | 9.7100 | 0.45 | 0.4443 | 0.4987 | 0.4220 |
| 0.0118 | 60.0 | 3000 | 3.7963 | 0.43 | 1.0135 | 9.4040 | 0.4300 | 0.4007 | 0.5007 | 0.3979 |
| 0.0118 | 61.0 | 3050 | 4.0609 | 0.43 | 1.0426 | 9.3533 | 0.4300 | 0.3825 | 0.5266 | 0.4285 |
| 0.0118 | 62.0 | 3100 | 4.0150 | 0.47 | 1.0002 | 9.3307 | 0.47 | 0.4490 | 0.5030 | 0.4052 |
| 0.0118 | 63.0 | 3150 | 3.7982 | 0.47 | 0.9660 | 8.5060 | 0.47 | 0.4581 | 0.4716 | 0.3988 |
| 0.0118 | 64.0 | 3200 | 4.3553 | 0.44 | 1.0428 | 10.3840 | 0.44 | 0.4218 | 0.5163 | 0.4312 |
| 0.0118 | 65.0 | 3250 | 3.7142 | 0.44 | 0.9900 | 8.5049 | 0.44 | 0.4298 | 0.4849 | 0.3735 |
| 0.0118 | 66.0 | 3300 | 3.7411 | 0.47 | 0.9661 | 8.1935 | 0.47 | 0.4497 | 0.4789 | 0.3812 |
| 0.0118 | 67.0 | 3350 | 3.7858 | 0.49 | 0.9574 | 8.8397 | 0.49 | 0.4799 | 0.4616 | 0.3895 |
| 0.0118 | 68.0 | 3400 | 3.7927 | 0.495 | 0.9459 | 8.6915 | 0.495 | 0.4870 | 0.4577 | 0.3883 |
| 0.0118 | 69.0 | 3450 | 3.8348 | 0.5 | 0.9454 | 8.8298 | 0.5 | 0.4889 | 0.4715 | 0.3891 |
| 0.0004 | 70.0 | 3500 | 3.8551 | 0.48 | 0.9500 | 8.9827 | 0.48 | 0.4755 | 0.4691 | 0.3913 |
| 0.0004 | 71.0 | 3550 | 3.8432 | 0.48 | 0.9622 | 9.1404 | 0.48 | 0.4691 | 0.4690 | 0.3885 |
| 0.0004 | 72.0 | 3600 | 3.8594 | 0.48 | 0.9617 | 8.8182 | 0.48 | 0.4691 | 0.4805 | 0.3854 |
| 0.0004 | 73.0 | 3650 | 3.8855 | 0.485 | 0.9622 | 8.8248 | 0.485 | 0.4760 | 0.4809 | 0.3881 |
| 0.0004 | 74.0 | 3700 | 3.8996 | 0.49 | 0.9610 | 8.9750 | 0.49 | 0.4818 | 0.4634 | 0.3892 |
| 0.0004 | 75.0 | 3750 | 3.9921 | 0.475 | 0.9642 | 9.5409 | 0.4750 | 0.4597 | 0.4666 | 0.4185 |
| 0.0004 | 76.0 | 3800 | 4.1128 | 0.43 | 1.0429 | 9.9966 | 0.4300 | 0.3844 | 0.5187 | 0.4056 |
| 0.0004 | 77.0 | 3850 | 4.0783 | 0.44 | 1.0172 | 9.3016 | 0.44 | 0.4205 | 0.5051 | 0.3988 |
| 0.0004 | 78.0 | 3900 | 4.0804 | 0.445 | 1.0254 | 8.9753 | 0.445 | 0.4246 | 0.5089 | 0.3982 |
| 0.0004 | 79.0 | 3950 | 4.0892 | 0.445 | 1.0269 | 8.8290 | 0.445 | 0.4246 | 0.5069 | 0.4000 |
| 0.0002 | 80.0 | 4000 | 4.1013 | 0.445 | 1.0258 | 9.1363 | 0.445 | 0.4246 | 0.5129 | 0.4033 |
| 0.0002 | 81.0 | 4050 | 4.0985 | 0.44 | 1.0287 | 9.1459 | 0.44 | 0.4213 | 0.5074 | 0.4054 |
| 0.0002 | 82.0 | 4100 | 4.1029 | 0.44 | 1.0263 | 9.3107 | 0.44 | 0.4211 | 0.5125 | 0.4066 |
| 0.0002 | 83.0 | 4150 | 4.1075 | 0.44 | 1.0248 | 9.4604 | 0.44 | 0.4224 | 0.5164 | 0.4061 |
| 0.0002 | 84.0 | 4200 | 4.1087 | 0.44 | 1.0225 | 9.7739 | 0.44 | 0.4221 | 0.5090 | 0.4055 |
| 0.0002 | 85.0 | 4250 | 4.1248 | 0.44 | 1.0262 | 9.7747 | 0.44 | 0.4259 | 0.5032 | 0.4065 |
| 0.0002 | 86.0 | 4300 | 4.1527 | 0.445 | 1.0263 | 9.4647 | 0.445 | 0.4299 | 0.5128 | 0.4066 |
| 0.0002 | 87.0 | 4350 | 4.0529 | 0.475 | 0.9810 | 9.1439 | 0.4750 | 0.4488 | 0.4910 | 0.3938 |
| 0.0002 | 88.0 | 4400 | 4.1405 | 0.455 | 1.0091 | 9.5149 | 0.455 | 0.4230 | 0.4966 | 0.4147 |
| 0.0002 | 89.0 | 4450 | 4.3483 | 0.41 | 1.0724 | 9.8421 | 0.41 | 0.4083 | 0.5384 | 0.4090 |
| 0.0008 | 90.0 | 4500 | 4.5574 | 0.39 | 1.1077 | 11.2517 | 0.39 | 0.3940 | 0.5618 | 0.4405 |
| 0.0008 | 91.0 | 4550 | 4.5104 | 0.41 | 1.0890 | 10.8687 | 0.41 | 0.4173 | 0.5411 | 0.4350 |
| 0.0008 | 92.0 | 4600 | 4.3791 | 0.425 | 1.0672 | 10.7198 | 0.425 | 0.4202 | 0.5233 | 0.4306 |
| 0.0008 | 93.0 | 4650 | 4.3608 | 0.43 | 1.0553 | 10.8428 | 0.4300 | 0.4236 | 0.5196 | 0.4284 |
| 0.0008 | 94.0 | 4700 | 4.3469 | 0.44 | 1.0474 | 10.6774 | 0.44 | 0.4428 | 0.5020 | 0.4280 |
| 0.0008 | 95.0 | 4750 | 4.3420 | 0.44 | 1.0487 | 10.5138 | 0.44 | 0.4385 | 0.5260 | 0.4270 |
| 0.0008 | 96.0 | 4800 | 4.3385 | 0.435 | 1.0491 | 10.3448 | 0.435 | 0.4312 | 0.5170 | 0.4266 |
| 0.0008 | 97.0 | 4850 | 4.3341 | 0.435 | 1.0485 | 10.3378 | 0.435 | 0.4312 | 0.5136 | 0.4261 |
| 0.0008 | 98.0 | 4900 | 4.3336 | 0.435 | 1.0480 | 10.3350 | 0.435 | 0.4312 | 0.5184 | 0.4253 |
| 0.0008 | 99.0 | 4950 | 4.3306 | 0.435 | 1.0472 | 10.3328 | 0.435 | 0.4299 | 0.5116 | 0.4245 |
| 0.0001 | 100.0 | 5000 | 4.3305 | 0.435 | 1.0472 | 10.3327 | 0.435 | 0.4299 | 0.5115 | 0.4245 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
km0228kr/xlm-roberta-base-finetuned-panx-de
|
km0228kr
| 2023-07-10T10:57:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T10:47:06Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
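In the absence of further details, a minimal inference sketch with the `transformers` pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="km0228kr/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```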
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Apocalypse-19/trocr-MICR
|
Apocalypse-19
| 2023-07-10T10:45:25Z | 73 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"Image-to-Text",
"trocr",
"en",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-07T15:35:59Z |
---
language:
- en
tags:
- Image-to-Text
- trocr
---
## TrOCR-MICR
An OCR model for transcribing e13b MICR codes, fine-tuned from Microsoft's [TrOCR-large-stage1](https://huggingface.co/microsoft/trocr-large-printed).
The model was fine-tuned on the e13b portion of the MICR dataset available [here](https://github.com/DoubangoTelecom/tesseractMICR).
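A minimal inference sketch, assuming the repository ships the TrOCR processor files alongside the model weights (otherwise load the processor from the base TrOCR checkpoint); the image path is illustrative:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("Apocalypse-19/trocr-MICR")
model = VisionEncoderDecoderModel.from_pretrained("Apocalypse-19/trocr-MICR")

# Crop of a single MICR line from a cheque image
image = Image.open("micr_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```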
|
FelixChao/medical_faq_gpt_vicuna7b_chinese
|
FelixChao
| 2023-07-10T10:32:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T10:31:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
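These values correspond roughly to the `BitsAndBytesConfig` in the sketch below; the base checkpoint name is an assumption (the adapter name suggests a Vicuna-7B base), so substitute the model the adapter was actually trained on:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base model id is an assumption -- replace with the actual base checkpoint
base = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.3", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "FelixChao/medical_faq_gpt_vicuna7b_chinese")
```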
### Framework versions
- PEFT 0.4.0.dev0
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-25
|
crisU8
| 2023-07-10T10:18:28Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T09:52:07Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-25
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2487
- Precision: 0.7372
- Recall: 0.8035
- F1: 0.7689
- Accuracy: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 18
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 446 | 0.2607 | 0.6701 | 0.7772 | 0.7197 | 0.9113 |
| 0.6128 | 2.0 | 892 | 0.2298 | 0.7266 | 0.7964 | 0.7599 | 0.9254 |
| 0.1927 | 3.0 | 1338 | 0.2487 | 0.7372 | 0.8035 | 0.7689 | 0.9270 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
|
jordyvl
| 2023-07-10T10:05:07Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T09:48:49Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4358
- Accuracy: 0.195
- Brier Loss: 0.9035
- Nll: 12.0550
- F1 Micro: 0.195
- F1 Macro: 0.1471
- Ece: 0.1675
- Aurc: 0.6988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
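For reference, a minimal `TrainingArguments` sketch reproducing these settings might look as follows (the `output_dir` is illustrative; the optimizer values listed above match the `Trainer` defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=25,
)
```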
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.5167 | 0.07 | 0.9368 | 20.8948 | 0.07 | 0.0305 | 0.1106 | 0.8850 |
| No log | 2.0 | 50 | 1.5246 | 0.08 | 0.9362 | 21.4368 | 0.08 | 0.0346 | 0.1200 | 0.8659 |
| No log | 3.0 | 75 | 1.5053 | 0.1 | 0.9340 | 23.7241 | 0.1000 | 0.0522 | 0.1280 | 0.8087 |
| No log | 4.0 | 100 | 1.5097 | 0.0975 | 0.9322 | 17.3004 | 0.0975 | 0.0487 | 0.1220 | 0.8220 |
| No log | 5.0 | 125 | 1.4926 | 0.12 | 0.9296 | 16.3893 | 0.12 | 0.0600 | 0.1284 | 0.7752 |
| No log | 6.0 | 150 | 1.4838 | 0.105 | 0.9273 | 19.3692 | 0.1050 | 0.0356 | 0.1254 | 0.7955 |
| No log | 7.0 | 175 | 1.4729 | 0.0975 | 0.9229 | 18.6899 | 0.0975 | 0.0411 | 0.1134 | 0.7963 |
| No log | 8.0 | 200 | 1.4754 | 0.125 | 0.9196 | 17.7842 | 0.125 | 0.0676 | 0.1238 | 0.7778 |
| No log | 9.0 | 225 | 1.4725 | 0.1125 | 0.9193 | 16.6572 | 0.1125 | 0.0505 | 0.1254 | 0.7839 |
| No log | 10.0 | 250 | 1.4702 | 0.1175 | 0.9168 | 16.3975 | 0.1175 | 0.0556 | 0.1183 | 0.7638 |
| No log | 11.0 | 275 | 1.4648 | 0.1175 | 0.9169 | 18.4274 | 0.1175 | 0.0558 | 0.1219 | 0.7806 |
| No log | 12.0 | 300 | 1.4660 | 0.155 | 0.9166 | 15.6492 | 0.155 | 0.0791 | 0.1411 | 0.7512 |
| No log | 13.0 | 325 | 1.4684 | 0.16 | 0.9164 | 17.1698 | 0.16 | 0.1140 | 0.1519 | 0.7285 |
| No log | 14.0 | 350 | 1.4662 | 0.1175 | 0.9158 | 17.6999 | 0.1175 | 0.0501 | 0.1269 | 0.7637 |
| No log | 15.0 | 375 | 1.4602 | 0.1675 | 0.9143 | 13.2540 | 0.1675 | 0.1153 | 0.1515 | 0.7223 |
| No log | 16.0 | 400 | 1.4556 | 0.1325 | 0.9138 | 13.3868 | 0.1325 | 0.0881 | 0.1323 | 0.7558 |
| No log | 17.0 | 425 | 1.4527 | 0.175 | 0.9128 | 11.1983 | 0.175 | 0.1334 | 0.1596 | 0.7153 |
| No log | 18.0 | 450 | 1.4535 | 0.1625 | 0.9111 | 17.6046 | 0.1625 | 0.1021 | 0.1435 | 0.7379 |
| No log | 19.0 | 475 | 1.4453 | 0.1825 | 0.9086 | 11.8948 | 0.1825 | 0.1228 | 0.1594 | 0.7098 |
| 1.4614 | 20.0 | 500 | 1.4431 | 0.1525 | 0.9078 | 14.2631 | 0.1525 | 0.1115 | 0.1410 | 0.7293 |
| 1.4614 | 21.0 | 525 | 1.4392 | 0.1825 | 0.9063 | 10.7664 | 0.1825 | 0.1378 | 0.1567 | 0.7058 |
| 1.4614 | 22.0 | 550 | 1.4469 | 0.1775 | 0.9055 | 13.4724 | 0.1775 | 0.1212 | 0.1483 | 0.7107 |
| 1.4614 | 23.0 | 575 | 1.4356 | 0.17 | 0.9039 | 11.8141 | 0.17 | 0.1232 | 0.1515 | 0.7091 |
| 1.4614 | 24.0 | 600 | 1.4370 | 0.1875 | 0.9039 | 12.9338 | 0.1875 | 0.1384 | 0.1539 | 0.7017 |
| 1.4614 | 25.0 | 625 | 1.4358 | 0.195 | 0.9035 | 12.0550 | 0.195 | 0.1471 | 0.1675 | 0.6988 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
ducnapa/cute-cartoon-illustration
|
ducnapa
| 2023-07-10T09:59:49Z | 29 | 5 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-23T10:00:04Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
justinpinkney/falcon-7b
|
justinpinkney
| 2023-07-10T09:49:02Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-07T14:25:17Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
duplicated_from: tiiuae/falcon-7b
---
# 🚀 Falcon-7B
**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon*.
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
[email protected]
|
TheBloke/GodziLLa-30B-GGML
|
TheBloke
| 2023-07-10T09:38:45Z | 0 | 4 | null |
[
"merge",
"mix",
"cot",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2023-07-09T11:53:15Z |
---
inference: false
license: other
pipeline_tag: text-generation
tags:
- merge
- mix
- cot
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Maya Philippines' GodziLLa 30B GGML
These files are GGML format model files for [Maya Philippines' GodziLLa 30B](https://huggingface.co/MayaPH/GodziLLa-30B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Licensing
This model is GodziLLa-30B, a language model developed by Maya Philippines.
Maya Philippines' work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
For more information, visit: https://creativecommons.org/licenses/by-nc/4.0/
This model is based on Meta LLaMA weights, which are licensed under a bespoke research-only non-commercial license.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GodziLLa-30B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/GodziLLa-30B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/MayaPH/GodziLLa-30B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: PROMPT
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| godzilla-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| godzilla-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| godzilla-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| godzilla-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| godzilla-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| godzilla-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| godzilla-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| godzilla-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| godzilla-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| godzilla-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| godzilla-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| godzilla-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| godzilla-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| godzilla-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m godzilla-30b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
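## How to run from Python with `llama-cpp-python`
A minimal sketch is shown below; the chosen quantisation file and generation settings are assumptions, and GGML v3 files require a llama-cpp-python release from before the GGUF transition:

```python
from llama_cpp import Llama

# Any of the provided .bin files works; q4_K_M is assumed here
llm = Llama(model_path="godzilla-30b.ggmlv3.q4_K_M.bin", n_ctx=2048, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction: Write a story about llamas\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```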
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Maya Philippines' GodziLLa 30B
<img src="https://drive.google.com/uc?export=view&id=16DzZwhqybQvT1wQVp-6qXHI9HhKft6CR" width="50%" alt="GodziLLa-30B">
Released July 9, 2023
## Model Description
GodziLLa-30B is an experimental combination of various proprietary Maya LoRAs with CalderaAI's [Lazarus-30B](https://huggingface.co/CalderaAI/30B-Lazarus). This composite model is not meant for any other use outside of research on competing LoRA adapter behavior. More specifically, since this is inherently a LlaMA model, **commercial use is prohibited**. This model's primary purpose is to stress test the limitations of composite LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

## Recommended Prompt Format
Alpaca's instruction is the recommended prompt format, but Vicuna's instruction format may also work.
## Usage
To use GodziLLa-30B, you are required to provide attribution in accordance with the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Please include the following attribution notice when utilizing GodziLLa-30B in your work:
```python
# This code uses GodziLLa-30B, a language model developed by Maya Philippines.
# The model is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
# For more information, visit: https://creativecommons.org/licenses/by-nc/4.0/
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MayaPH/GodziLLa-30B")
model = AutoModelForCausalLM.from_pretrained("MayaPH/GodziLLa-30B")
```
Please ensure that you include the relevant attribution notice in your code or any other form of usage and restrict your usage to non-commercial use to comply with the license terms.
## Ethical Considerations
When using GodziLLa-30B, it is important to consider the following ethical considerations:
1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
4. **User Responsibility:** Users should take responsibility for their own decisions and not solely rely on the information provided by the model. Consult with the appropriate professionals or reliable sources for specific advice or recommendations.
5. **NSFW Content:** The model is a merge of multiple model checkpoints and LoRA adapters. It is highly likely that the resulting model contains uncensored content that may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
## Further Information
For additional information or inquiries about GodziLLa-30B, please contact the Maya Philippines iOps Team via [email protected].
## Disclaimer
GodziLLa-30B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
## Acknowledgments
The development of GodziLLa-30B was made possible by Maya Philippines and the curation of the various proprietary datasets and creation of the different proprietary LoRA adapters.
|
ArimaKana38/alpaca-cmkl
|
ArimaKana38
| 2023-07-10T09:33:02Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-08T11:17:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
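For reference, loading the adapter on top of its base model with the same 8-bit setting might look like the sketch below; the base checkpoint is not stated in this card, so the placeholder must be replaced:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# "<base-model-id>" is a placeholder for the base checkpoint the adapter was trained on
base = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ArimaKana38/alpaca-cmkl")
```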
### Framework versions
- PEFT 0.4.0.dev0
|
Arindam75/a2c-PandaReachDense-v2
|
Arindam75
| 2023-07-10T09:27:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T09:24:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.60 +/- 0.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption (check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename below is assumed to follow the usual <algo>-<env>.zip convention
checkpoint = load_from_hub(repo_id="Arindam75/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
nolanaatama/stnmrshsthprkrvcv2300pchrhys
|
nolanaatama
| 2023-07-10T09:19:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T09:16:40Z |
---
license: creativeml-openrail-m
---
|
Salama1429/TTS_German_Speecht5_finetuned_voxpopuli_nl
|
Salama1429
| 2023-07-10T09:13:40Z | 326 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"nl",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-10T06:43:53Z |
---
language:
- nl
license: mit
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: TTS_German_Speecht5_finetuned_voxpopuli_nl
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TTS_German_Speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4593
## Model description
More information needed
## Intended uses & limitations
More information needed
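In the absence of further details, a minimal inference sketch, assuming the checkpoint keeps the standard SpeechT5 interface; the speaker x-vector and the input sentence are illustrative:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Salama1429/TTS_German_Speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as the speaker embedding; this one is illustrative
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```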
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5248 | 4.3 | 1000 | 0.4792 |
| 0.5019 | 8.61 | 2000 | 0.4663 |
| 0.4937 | 12.91 | 3000 | 0.4609 |
| 0.4896 | 17.21 | 4000 | 0.4593 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-22
|
crisU8
| 2023-07-10T09:04:25Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T08:59:40Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-22
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- Precision: 0.7554
- Recall: 0.8271
- F1: 0.7896
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5983 | 1.0 | 572 | 0.2405 | 0.7044 | 0.7964 | 0.7476 | 0.9227 |
| 0.1979 | 2.0 | 1144 | 0.2421 | 0.7296 | 0.8189 | 0.7717 | 0.9275 |
| 0.1406 | 3.0 | 1716 | 0.2380 | 0.7554 | 0.8271 | 0.7896 | 0.9320 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JensTiburski/test
|
JensTiburski
| 2023-07-10T09:00:29Z | 0 | 0 | null |
[
"de",
"region:us"
] | null | 2023-07-10T08:56:13Z |
---
language:
- de
---
```python
from diffusers import StableDiffusionLDM3DPipeline

# Load Intel's LDM3D pipeline, which generates an RGB image and a depth map jointly
pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d")

prompt = "A picture of a castle in the mountains"
output = pipe_ldm3d(prompt)
rgb_image, depth_image = output.rgb, output.depth
rgb_image[0].save("castle_ldm3d_rgb.jpg")
depth_image[0].save("castle_ldm3d_depth.png")
```
---
license: cc
---
|
soBeauty/xlm-roberta-base-09072023-revised_2
|
soBeauty
| 2023-07-10T08:59:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-09T15:31:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-09072023-revised_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-09072023-revised_2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5577
- Loss: 2.2632
## Model description
More information needed
## Intended uses & limitations
More information needed
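In the absence of further details, a minimal fill-mask sketch (the example sentence is illustrative; XLM-RoBERTa uses `<mask>` as its mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="soBeauty/xlm-roberta-base-09072023-revised_2")
print(fill_mask("Hugging Face is creating a <mask> that the community builds together."))
```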
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 2.725 | 3.85 | 500 | 0.4944 | 2.4300 |
| 2.5409 | 7.69 | 1000 | 0.5577 | 2.2632 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aeala/Chronoboros-33b-4bit
|
Aeala
| 2023-07-10T08:48:19Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T06:52:52Z |
4-bit GPTQ quantization of the [chronoboros-33b](https://huggingface.co/Henk717/chronoboros-33B) merge.
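A minimal loading sketch with AutoGPTQ; whether the repository ships safetensors weights (and whether a `model_basename` is required) is an assumption, so adjust to the actual file layout:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "Aeala/Chronoboros-33b-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)
# use_safetensors is an assumption about the file format in the repo
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

prompt = "### Instruction: Write a short story about llamas.\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```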
|
danbrown/checkpoints
|
danbrown
| 2023-07-10T08:43:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T08:20:33Z |
---
license: creativeml-openrail-m
---
This is a collection of Stable Diffusion model checkpoints, just like my other LoRA collection.
I may list them here with more details as I add the models.
The models here can be third-party checkpoints or personal experiments.
|
SpringYung/falcon_with_10latex
|
SpringYung
| 2023-07-10T08:42:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T08:41:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
SpringYung/dolly_with_10examples
|
SpringYung
| 2023-07-10T08:30:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T08:30:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-15
|
crisU8
| 2023-07-10T08:15:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T07:57:20Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-15
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2339
- Precision: 0.7526
- Recall: 0.8282
- F1: 0.7886
- Accuracy: 0.9309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2466 | 0.6954 | 0.7958 | 0.7423 | 0.9223 |
| 0.5736 | 2.0 | 858 | 0.2380 | 0.7354 | 0.8178 | 0.7744 | 0.9264 |
| 0.1845 | 3.0 | 1287 | 0.2339 | 0.7526 | 0.8282 | 0.7886 | 0.9309 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
komo-dono/matsuoka
|
komo-dono
| 2023-07-10T08:14:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T08:12:42Z |
---
license: openrail
language:
- ja
tags:
- music
---

matsuoka 500 epoch
|
guaguale/model_kthv_vcg
|
guaguale
| 2023-07-10T08:11:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-09T18:50:52Z |
---
license: creativeml-openrail-m
base_model: /mmu_vcg_ssd/liuhao12/workspace/1_diffusion/models/sd-zhuxiongwei-320nodes-task3_2-0612/checkpoint-28000/
instance_prompt: a male idol sks with blonde hair, wearing a black jacket and fringes on the sides of the jacket
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - guaguale/model_kthv_vcg
This is a dreambooth model derived from /mmu_vcg_ssd/liuhao12/workspace/1_diffusion/models/sd-zhuxiongwei-320nodes-task3_2-0612/checkpoint-28000/. The weights were trained on a male idol sks with blonde hair, wearing a black jacket and fringes on the sides of the jacket using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
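A minimal text-to-image sketch with this checkpoint (generation settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("guaguale/model_kthv_vcg", torch_dtype=torch.float16).to("cuda")
prompt = "a male idol sks with blonde hair, wearing a black jacket and fringes on the sides of the jacket"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("idol_sks.png")
```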
|
lloydchang/wongstein-vide-noir
|
lloydchang
| 2023-07-10T07:49:17Z | 207 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text-generation-inference",
"en",
"dataset:amazon_us_reviews",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T07:43:16Z |
---
license: creativeml-openrail-m
datasets:
- amazon_us_reviews
language:
- en
tags:
- text-generation-inference
---
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-13
|
crisU8
| 2023-07-10T07:39:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T07:21:14Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-13
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2336
- Precision: 0.7488
- Recall: 0.8227
- F1: 0.7840
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2408 | 0.7044 | 0.8123 | 0.7545 | 0.9223 |
| 0.5338 | 2.0 | 858 | 0.2382 | 0.7322 | 0.8178 | 0.7726 | 0.9264 |
| 0.1771 | 3.0 | 1287 | 0.2336 | 0.7488 | 0.8227 | 0.7840 | 0.9308 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-guten-mod-rm-ref-2k-rarity-2p5k-p13k
|
NasimB
| 2023-07-10T07:28:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T05:33:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-mod-rm-ref-2k-rarity-2p5k-p13k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-mod-rm-ref-2k-rarity-2p5k-p13k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7031 | 0.29 | 500 | 5.6491 |
| 5.3381 | 0.59 | 1000 | 5.2075 |
| 4.9932 | 0.88 | 1500 | 4.9530 |
| 4.7151 | 1.17 | 2000 | 4.8058 |
| 4.5556 | 1.46 | 2500 | 4.6811 |
| 4.4516 | 1.76 | 3000 | 4.5752 |
| 4.3291 | 2.05 | 3500 | 4.4930 |
| 4.1341 | 2.34 | 4000 | 4.4457 |
| 4.1006 | 2.63 | 4500 | 4.3896 |
| 4.0621 | 2.93 | 5000 | 4.3371 |
| 3.8509 | 3.22 | 5500 | 4.3335 |
| 3.8058 | 3.51 | 6000 | 4.2974 |
| 3.7835 | 3.81 | 6500 | 4.2701 |
| 3.6851 | 4.1 | 7000 | 4.2656 |
| 3.5155 | 4.39 | 7500 | 4.2594 |
| 3.5136 | 4.68 | 8000 | 4.2428 |
| 3.5037 | 4.98 | 8500 | 4.2302 |
| 3.3411 | 5.27 | 9000 | 4.2422 |
| 3.321 | 5.56 | 9500 | 4.2417 |
| 3.323 | 5.85 | 10000 | 4.2412 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chizhikchi/sci-five-radsum23
|
chizhikchi
| 2023-07-10T07:27:40Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"medical",
"clinical",
"en",
"dataset:MIMIC-III",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-05-02T10:22:12Z |
---
license: afl-3.0
tags:
- summarization
- t5
- medical
- clinical
language: en
datasets:
- MIMIC-III
widget:
- again noted is the large intraparenchymal hemorrhage in the posterior right frontal lobe with extension into both lateral ventricles. the degree of surrounding edema and effacement of adjacent sulci is unchanged. there is minor contralateral shift of normal midline structures. the ventricular size is unchanged. subarachnoid blood is now seen in the left frontal and parietal lobes, likely due to recirculation of the ventricular blood.
- a least two attempts were made at imaging, however, the study remains severely limited by patient motion. minimal hyperdensity tracks along a left parietal sulcus (2a:18) is equivocal for a small subarachnoid hemorhage. there is no large mass effect detected. there is no shift of normally midline structures. a minimally displaced zygomatic fracture is present (2a:9). the middle ear cavities, mastoid air cells are clear. there is extensive soft tissue swelling overlying the right frontal calvarium with swelling extending to the right preseptal soft tissues (2a:12). there is mild - moderate mucosal thickening within the ethmoid and maxillary sinuses with some fluid and fluid mucosal thickening in the sphenoid sinus.
inference:
parameters:
max_length: 350
metrics:
- rouge-l
---
# Impression Section Generator for Radiology Reports 🏥
This model is the result of the SINAI team's participation in [Task 1B: Radiology Report Summarization](https://vilmedic.app/misc/bionlp23/sharedtask) at the BioNLP workshop held at ACL 2023.
The goal of this task is to foster the development of automatic radiology report summarization systems and to expand their applicability by incorporating seven different modalities and anatomies in the provided data.
We propose to automate the generation of radiology impressions with sequence-to-sequence learning that leverages publicly available pre-trained models, both general-domain and biomedical domain-specific.
This repository provides access to our best-performing system, obtained by fine-tuning [Sci-Five base](https://huggingface.co/razent/SciFive-base-Pubmed_PMC), a T5 model trained for an extra 200k steps to adapt it to biomedical literature.
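A minimal inference sketch, assuming standard `transformers` sequence-to-sequence usage (the findings text is shortened from the widget example above):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chizhikchi/sci-five-radsum23"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

findings = (
    "again noted is the large intraparenchymal hemorrhage in the posterior "
    "right frontal lobe with extension into both lateral ventricles."
)
inputs = tokenizer(findings, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=350)  # matches the widget max_length
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```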
# Results
The official evaluation results show that adapting a general-domain system to biomedical literature is beneficial for subsequent fine-tuning on the radiology report summarization task. The table below summarizes the official scores obtained by this model during the evaluation. Team standings are available [here](https://vilmedic.app/misc/bionlp23/leaderboard/).
| BLEU4 | ROUGE-L | BERTscore | F1-RadGraph |
|-------|---------|-----------|-------------|
| 17.38 | 32.32 | 55.04 | 33.96 |
# System description paper and citation
The paper with the detailed description of the system is published in the [Proceedings of the 22nd Workshop on Biomedical Language Processing](https://aclanthology.org/2023.bionlp-1.53/).
BibTeX citation:
```
@inproceedings{chizhikova-etal-2023-sinai,
title = "{SINAI} at {R}ad{S}um23: Radiology Report Summarization Based on Domain-Specific Sequence-To-Sequence Transformer Model",
author = "Chizhikova, Mariia and
Diaz-Galiano, Manuel and
Urena-Lopez, L. Alfonso and
Martin-Valdivia, M. Teresa",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.53",
pages = "530--534",
abstract = "This paper covers participation of the SINAI team in the shared task 1B: Radiology Report Summarization at the BioNLP workshop held on ACL 2023. Our proposal follows a sequence-to-sequence approach which leverages pre-trained multilingual general domain and monolingual biomedical domain pre-trained language models. The best performing system based on domain-specific model reached 33.96 F1RadGraph score which is the fourth best result among the challenge participants. This model was made publicly available on HuggingFace. We also describe an attempt of Proximal Policy Optimization Reinforcement Learning that was made in order to improve the factual correctness measured with F1RadGraph but did not lead to satisfactory results.",
}
```
|
zhundred/Taxi-v3
|
zhundred
| 2023-07-10T07:27:01Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:26:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="zhundred/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zwtharry/ppo-Huggy
|
zwtharry
| 2023-07-10T07:13:03Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:12:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zwtharry/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yuuhan/roberta-base-rte-lora-layer6-11
|
yuuhan
| 2023-07-10T07:04:43Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T02:32:01Z |
---
library_name: peft
---
## Training procedure
RTE acc: 0.7364620938628159
### Framework versions
- PEFT 0.4.0.dev0
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-11
|
crisU8
| 2023-07-10T07:00:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T06:38:46Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-11
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Precision: 0.7534
- Recall: 0.8216
- F1: 0.7860
- Accuracy: 0.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2492 | 0.6813 | 0.7849 | 0.7294 | 0.9211 |
| 0.6251 | 2.0 | 858 | 0.2420 | 0.7467 | 0.8189 | 0.7812 | 0.9288 |
| 0.1942 | 3.0 | 1287 | 0.2334 | 0.7534 | 0.8216 | 0.7860 | 0.9328 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hoanghoavienvo/bert-large-uncased-detect-depression-stage-one
|
hoanghoavienvo
| 2023-07-10T06:55:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T22:16:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-detect-depression-stage-one
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-detect-depression-stage-one
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7875
- eval_accuracy: 0.752
- eval_f1: 0.8092
- eval_runtime: 112.7187
- eval_samples_per_second: 8.872
- eval_steps_per_second: 2.218
- epoch: 3.0
- step: 4506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hell1/rare-puppers
|
hell1
| 2023-07-10T06:39:55Z | 194 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T06:39:48Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.939393937587738
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
charqican/nominal-groups-recognition-bert-base-spanish-wwm-cased
|
charqican
| 2023-07-10T06:37:54Z | 118 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:charqican/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T06:30:03Z |
---
language:
- es
tags:
- generated_from_trainer
datasets:
- charqican/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the charqican/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Ng Precision: 0.7140
- Ng Recall: 0.7695
- Ng F1: 0.7407
- Ng Number: 3198
- Overall Precision: 0.7140
- Overall Recall: 0.7695
- Overall F1: 0.7407
- Overall Accuracy: 0.8993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ng Precision | Ng Recall | Ng F1 | Ng Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:---------:|:------:|:---------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3988 | 1.0 | 228 | 0.2792 | 0.7108 | 0.7577 | 0.7335 | 3198 | 0.7108 | 0.7577 | 0.7335 | 0.8935 |
| 0.2257 | 2.0 | 456 | 0.2772 | 0.7140 | 0.7695 | 0.7407 | 3198 | 0.7140 | 0.7695 | 0.7407 | 0.8993 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-9
|
crisU8
| 2023-07-10T06:27:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T06:18:40Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-9
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2499
- Precision: 0.7544
- Recall: 0.8244
- F1: 0.7878
- Accuracy: 0.9331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2543 | 0.6951 | 0.7794 | 0.7348 | 0.9190 |
| 0.641 | 2.0 | 858 | 0.2443 | 0.7410 | 0.8117 | 0.7748 | 0.9273 |
| 0.1943 | 3.0 | 1287 | 0.2388 | 0.7378 | 0.8156 | 0.7748 | 0.9304 |
| 0.1211 | 4.0 | 1716 | 0.2499 | 0.7544 | 0.8244 | 0.7878 | 0.9331 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sukmin/ppo-SnowballTarget
|
Sukmin
| 2023-07-10T06:25:07Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-10T06:22:30Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Sukmin/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jhan405/sd-class-butterflies-64
|
jhan405
| 2023-07-10T06:22:38Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-10T06:21:51Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jhan405/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-7
|
crisU8
| 2023-07-10T06:09:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T06:00:18Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-7
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2317
- Precision: 0.7503
- Recall: 0.8227
- F1: 0.7848
- Accuracy: 0.9326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2497 | 0.6860 | 0.7794 | 0.7297 | 0.9201 |
| 0.6187 | 2.0 | 858 | 0.2391 | 0.7384 | 0.8134 | 0.7741 | 0.9293 |
| 0.1936 | 3.0 | 1287 | 0.2317 | 0.7503 | 0.8227 | 0.7848 | 0.9326 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/TEXT
|
digiplay
| 2023-07-10T06:07:16Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-10T06:00:56Z |
---
license: other
---
**TEXTUAL INVERSION**
Speech Bubble
https://civitai.com/models/103237/speech-bubble
Trigger Words:
***SpeechBubble-Test***

|
edgamer/Taxi-v3
|
edgamer
| 2023-07-10T05:50:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T05:34:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="edgamer/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Sukmin/Reinforce-PixelCopter
|
Sukmin
| 2023-07-10T05:46:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T03:34:22Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.40 +/- 26.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jhan405/sd-class-butterflies-32
|
jhan405
| 2023-07-10T05:41:33Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-10T05:40:27Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jhan405/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
desplesda/marian-finetuned-kde4-en-to-fr
|
desplesda
| 2023-07-10T05:39:43Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-09T14:25:59Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.86847876856917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.8685
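A short usage sketch, assuming the standard `transformers` translation pipeline:

```python
from transformers import pipeline

translator = pipeline(
    "translation", model="desplesda/marian-finetuned-kde4-en-to-fr"
)
print(translator("Default to expanded threads")[0]["translation_text"])
```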
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
edgamer/q-FrozenLake-v1-4x4-noSlippery
|
edgamer
| 2023-07-10T05:29:25Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T05:29:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="edgamer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
georgeleung30/bert-finetuned-n2c2-ner
|
georgeleung30
| 2023-07-10T05:26:17Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T05:19:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-n2c2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-n2c2-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2116
- Precision: 0.9059
- Recall: 0.8858
- F1: 0.8958
- Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0545 | 1.0 | 10469 | 0.1255 | 0.8929 | 0.8933 | 0.8931 | 0.9762 |
| 0.0639 | 2.0 | 20938 | 0.1136 | 0.8933 | 0.8784 | 0.8858 | 0.9747 |
| 0.0452 | 3.0 | 31407 | 0.1221 | 0.8864 | 0.8991 | 0.8927 | 0.9753 |
| 0.0284 | 4.0 | 41876 | 0.1453 | 0.9003 | 0.8821 | 0.8911 | 0.9756 |
| 0.0269 | 5.0 | 52345 | 0.1587 | 0.9011 | 0.8934 | 0.8972 | 0.9765 |
| 0.0202 | 6.0 | 62814 | 0.1756 | 0.9190 | 0.8660 | 0.8917 | 0.9755 |
| 0.0153 | 7.0 | 73283 | 0.1818 | 0.9063 | 0.8831 | 0.8945 | 0.9757 |
| 0.0119 | 8.0 | 83752 | 0.2012 | 0.9163 | 0.8744 | 0.8948 | 0.9760 |
| 0.0122 | 9.0 | 94221 | 0.1986 | 0.9001 | 0.8908 | 0.8954 | 0.9757 |
| 0.0073 | 10.0 | 104690 | 0.2116 | 0.9059 | 0.8858 | 0.8958 | 0.9758 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.8.1+cu111
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Martinez66/Helluva_Boss_FastyDubs_AllStars
|
Martinez66
| 2023-07-10T05:14:24Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-10T04:06:48Z |
---
license: openrail
---
This is the Spanish version.
Here you will find most of the voices that have appeared in the Spanish dub of Helluva Boss on the Fasty Dubs channel; their configuration and training settings are listed below:
| Epochs | Type | Name | Voice |
|--------|------|------|-------|
| 180 | harvest | Agente-1-FalloSinCovers | @FalloSinCovers |
| 175 | harvest | Agente-2-PatriciaAzan | @PatriciaAzan |
| 240 | harvest | Andrealphus-RodoBalderasLocutor | @RodoBalderasLocutor |
| 800 | harvest | Blitzo-Fasty-Dubs | @FastyDubs |
| 205 | harvest | Chazwick-Thurman-RyusakiDub | @RyusakiDub |
| 160 | harvest | Cletus-GeovarockFandubs | @GeovarockFandubs |
| 200 | harvest | Collin-Blowzyblue | @blowzyblue |
| 210 | mangio-crepe 60 | Crimson-Maximtru5314 | @maximtru5314 |
| 250 | mangio-crepe 64 | Deerie-KarlyPerCastle | @KarlyPerCastle |
| 200 | mangio-crepe 164 | Fizzarolli-Pafoparasito | @pafoparasito |
| 300 | magio-crepe 64 | Keenie-UnBacconFrito | @UnBacconFrito |
| 550 | harvest | Loona-Hakusagiart | @hakusagiart |
| 240 | magio-crepe 64 | Loopty-Goopty-Pafoparasito | @pafoparasito |
| 350 | crepe | Lyle-Lipton-Guonejo | @Guonejo |
| 750 | harvest | Millie-Karu | @karucovers_gt |
| 780 | harvest | Moxxie-Rubecai | @Rubecai |
| 650 | harvest | Ozzie-Asmodeo-Cancion-Maximtru5314 | @SantiagoVoiceo |
| | | Dialogos-SantiagoVoiceo | @maximtru5314 |
| 420 | harvest | Octavia-Mary-Ciel | @Jenncathvoice |
| 200 | magio-crepe 40 | Policia-Guonejo | @Guonejo |
| 450 | harvest | Stella-Sacrefleur | https://www.instagram.com/sacrefleur/ |
| 750 | harvest | Stolas-LaWeaAstral | @LaWeaAstral |
| 400 | harvest | Striker-Azzydubs5621 | @azzydubs5621 |
| 600 | magio-crepe 64 | Verosika-Mayday-KarlyPerCastle | @KarlyPerCastle |
| 300 | harvest | Vortex-RodoBalderas | @RodoBalderasLocutor |
| 200 | harvest | Wally-Wackford-RyusakiDub | @RyusakiDub |
Important: if you are going to use these voices, make sure you give credit to the people who brought them to life.
Give credit to the creators of the voices.
:::::::::::::::::::::::::::::::::::::::::::
Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for fair use for purposes such as
criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that
might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use.
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-4
|
crisU8
| 2023-07-10T05:12:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T05:01:35Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-4
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2677
- Precision: 0.7664
- Recall: 0.8337
- F1: 0.7986
- Accuracy: 0.9369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6626 | 1.0 | 857 | 0.2329 | 0.7139 | 0.8068 | 0.7575 | 0.9251 |
| 0.1834 | 2.0 | 1714 | 0.2479 | 0.7246 | 0.8216 | 0.7701 | 0.9268 |
| 0.1162 | 3.0 | 2571 | 0.2504 | 0.7616 | 0.8310 | 0.7948 | 0.9336 |
| 0.0862 | 4.0 | 3428 | 0.2677 | 0.7664 | 0.8337 | 0.7986 | 0.9369 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
smangrul/peft-lora-codegen-25-guanaco-v100-colab
|
smangrul
| 2023-07-10T05:11:58Z | 9 | 4 |
peft
|
[
"peft",
"tensorboard",
"generated_from_trainer",
"base_model:Salesforce/codegen25-7b-multi_P",
"base_model:adapter:Salesforce/codegen25-7b-multi_P",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T10:24:14Z |
---
license: apache-2.0
base_model: Salesforce/codegen25-7b-multi
tags:
- generated_from_trainer
model-index:
- name: peft-lora-codgen-25-guanaco-t4-colab
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-codgen-25-guanaco-t4-colab
This model is a fine-tuned version of [Salesforce/codegen25-7b-multi](https://huggingface.co/Salesforce/codegen25-7b-multi) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training (reconstructed as a `BitsAndBytesConfig` sketch after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
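For readability, the list above corresponds to the following `transformers` quantization config (a reconstruction, not code taken from the training run):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit was False
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```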
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0.dev0
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Minggu/anizhchie2
|
Minggu
| 2023-07-10T04:51:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T04:48:59Z |
---
license: creativeml-openrail-m
---
|
hegelty/KcBERT-Large-finetuned-josa-2
|
hegelty
| 2023-07-10T04:45:34Z | 80 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T02:30:56Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: hegelty/KcBERT-Large-finetuned-josa-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hegelty/KcBERT-Large-finetuned-josa-2
This model is a fine-tuned version of [beomi/KcBERT-Large](https://huggingface.co/beomi/KcBERT-Large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0039
- Validation Loss: 0.0000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 81390, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0039 | 0.0000 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.9.2
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-1
|
crisU8
| 2023-07-10T04:42:21Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T04:30:54Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-1
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2406
- Precision: 0.7503
- Recall: 0.8227
- F1: 0.7848
- Accuracy: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3906 | 1.0 | 857 | 0.2271 | 0.7130 | 0.8101 | 0.7585 | 0.9253 |
| 0.1758 | 2.0 | 1714 | 0.2378 | 0.7460 | 0.8222 | 0.7822 | 0.9290 |
| 0.125 | 3.0 | 2571 | 0.2406 | 0.7503 | 0.8227 | 0.7848 | 0.9318 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Raj-Sanjay-Shah/babyLM_roberta_base_epoch_5
|
Raj-Sanjay-Shah
| 2023-07-10T04:40:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T04:15:35Z |
---
license: cc-by-nc-sa-4.0
---
|
gameofdimension/lora-cs324-length-control
|
gameofdimension
| 2023-07-10T04:38:56Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2023-06-28T15:27:59Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: lora-cs324-length-control
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-cs324-length-control
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5584 | 0.16 | 500 | 3.2567 |
| 3.392 | 0.32 | 1000 | 3.2473 |
| 3.3584 | 0.48 | 1500 | 3.2290 |
| 3.3115 | 0.64 | 2000 | 3.2196 |
| 3.3041 | 0.8 | 2500 | 3.2126 |
| 3.3024 | 0.96 | 3000 | 3.2118 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bastianchinchon/nominal-groups-recognition-bert-base-spanish-wwm-cased
|
bastianchinchon
| 2023-07-10T04:34:17Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:bastianchinchon/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T04:06:46Z |
---
language:
- es
tags:
- generated_from_trainer
datasets:
- bastianchinchon/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the bastianchinchon/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2836
- Body Part Precision: 0.675
- Body Part Recall: 0.7191
- Body Part F1: 0.6964
- Body Part Number: 413
- Disease Precision: 0.7177
- Disease Recall: 0.7405
- Disease F1: 0.7289
- Disease Number: 975
- Family Member Precision: 0.8276
- Family Member Recall: 0.8
- Family Member F1: 0.8136
- Family Member Number: 30
- Medication Precision: 0.8228
- Medication Recall: 0.6989
- Medication F1: 0.7558
- Medication Number: 93
- Procedure Precision: 0.5586
- Procedure Recall: 0.5820
- Procedure F1: 0.5701
- Procedure Number: 311
- Overall Precision: 0.6864
- Overall Recall: 0.7075
- Overall F1: 0.6968
- Overall Accuracy: 0.9146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4304 | 1.0 | 1004 | 0.2985 | 0.5620 | 0.7022 | 0.6243 | 413 | 0.7059 | 0.6944 | 0.7001 | 975 | 0.8276 | 0.8 | 0.8136 | 30 | 0.6848 | 0.6774 | 0.6811 | 93 | 0.5390 | 0.5113 | 0.5248 | 311 | 0.6415 | 0.6658 | 0.6534 | 0.9028 |
| 0.2346 | 2.0 | 2008 | 0.2836 | 0.675 | 0.7191 | 0.6964 | 413 | 0.7177 | 0.7405 | 0.7289 | 975 | 0.8276 | 0.8 | 0.8136 | 30 | 0.8228 | 0.6989 | 0.7558 | 93 | 0.5586 | 0.5820 | 0.5701 | 311 | 0.6864 | 0.7075 | 0.6968 | 0.9146 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aammari/setfit-zero-shot-classification-pbsp-p3-sev
|
aammari
| 2023-07-10T04:31:17Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-10T04:28:56Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# setfit-zero-shot-classification-pbsp-p3-sev
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("setfit-zero-shot-classification-pbsp-p3-sev")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
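For reference, a minimal training sketch of the two-step SetFit procedure described above, using the pre-1.0 `SetFitTrainer` API; the toy dataset, base model, and hyperparameters are assumptions for illustration, not the ones used for this model:

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny toy dataset, for illustration only
train_ds = Dataset.from_dict({
    "text": ["the plan clearly describes severity", "severity is not mentioned"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for the Sentence Transformer body
    num_iterations=20,                # text pairs generated per example
    num_epochs=1,                     # epochs of contrastive fine-tuning
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head
```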
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
ghwangbo/Korean_Finetuned_Falcon
|
ghwangbo
| 2023-07-10T04:27:34Z | 2 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T08:32:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
cwiz/llama-7b-saiga-merged
|
cwiz
| 2023-07-10T04:26:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-09T18:56:19Z |
---
license: apache-2.0
---
[Saiga](https://huggingface.co/IlyaGusev/saiga_7b_lora) merged with [LLaMa-7b](https://huggingface.co/decapoda-research/llama-7b-hf) for further finetuning.
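A minimal loading sketch (standard `transformers` usage assumed; not instructions from the author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cwiz/llama-7b-saiga-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
```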
|
Raj-Sanjay-Shah/babyLM_roberta_base_epoch_15
|
Raj-Sanjay-Shah
| 2023-07-10T03:36:56Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T03:07:35Z |
---
license: cc-by-nc-nd-4.0
---
|
eugene-yang/colbertx-xlmr-large-tt-eng.zho
|
eugene-yang
| 2023-07-10T03:29:40Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"en",
"zh",
"arxiv:2201.08471",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-10T03:01:12Z |
---
license: mit
language:
- en
- zh
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- cross-language-retrieval
---
Model trained by [Suraj Nair](https://srnair.netlify.app/).
If you use the model, please cite our paper.
```bibtex
@inproceedings{colbert-x,
author = {Suraj Nair and Eugene Yang and Dawn Lawrie and Kevin Duh and Paul McNamee and Kenton Murray and James Mayfield and Douglas W. Oard},
title = {Transfer Learning Approaches for Building Cross-Language Dense Retrieval Models},
booktitle = {Proceedings of the 44th European Conference on Information Retrieval (ECIR)},
year = {2022},
url = {https://arxiv.org/abs/2201.08471}
}
```
|
casque/3DMM_V12
|
casque
| 2023-07-10T03:29:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T03:28:24Z |
---
license: creativeml-openrail-m
---
|
casque/Colored_Icons_by_vizsumit
|
casque
| 2023-07-10T03:24:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T03:23:35Z |
---
license: creativeml-openrail-m
---
|
casque/logo_v1-000012
|
casque
| 2023-07-10T03:21:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T03:20:36Z |
---
license: creativeml-openrail-m
---
|
japoople/test
|
japoople
| 2023-07-10T03:13:18Z | 0 | 0 |
nemo
|
[
"nemo",
"text-classification",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] |
text-classification
| 2023-07-10T03:07:08Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
pipeline_tag: text-classification
library_name: nemo
---
|
VFiona/opus-mt-en-it-finetuned_4600-en-to-it
|
VFiona
| 2023-07-10T03:06:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-09T19:45:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-it-finetuned_4600-en-to-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-it-finetuned_4600-en-to-it
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 259 | 0.5103 | 66.2359 | 28.3391 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
edgamer/ppo-LunarLander-v2
|
edgamer
| 2023-07-10T02:51:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T01:11:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.47 +/- 22.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
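A possible completion of the template above (a sketch, not the author's code); the checkpoint file name follows the usual `huggingface_sb3` naming convention and is an assumption:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# File name is assumed; adjust to the actual .zip stored in this repository.
checkpoint = load_from_hub("edgamer/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="rgb_array")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```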
|
NasimB/gpt2-concat-mod-datasets-rarity1-rerun
|
NasimB
| 2023-07-10T02:49:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T00:33:42Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-mod-datasets-rarity1-rerun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-mod-datasets-rarity1-rerun
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7311 | 0.3 | 500 | 5.6497 |
| 5.3805 | 0.59 | 1000 | 5.2065 |
| 5.0306 | 0.89 | 1500 | 4.9574 |
| 4.7526 | 1.18 | 2000 | 4.8142 |
| 4.6058 | 1.48 | 2500 | 4.6885 |
| 4.4982 | 1.78 | 3000 | 4.5904 |
| 4.3593 | 2.07 | 3500 | 4.5261 |
| 4.185 | 2.37 | 4000 | 4.4783 |
| 4.154 | 2.66 | 4500 | 4.4233 |
| 4.1262 | 2.96 | 5000 | 4.3708 |
| 3.8986 | 3.26 | 5500 | 4.3804 |
| 3.8767 | 3.55 | 6000 | 4.3494 |
| 3.8605 | 3.85 | 6500 | 4.3124 |
| 3.7194 | 4.14 | 7000 | 4.3395 |
| 3.5981 | 4.44 | 7500 | 4.3194 |
| 3.5952 | 4.74 | 8000 | 4.3059 |
| 3.5511 | 5.03 | 8500 | 4.3089 |
| 3.3393 | 5.33 | 9000 | 4.3236 |
| 3.3388 | 5.62 | 9500 | 4.3220 |
| 3.3443 | 5.92 | 10000 | 4.3139 |
| 3.2213 | 6.22 | 10500 | 4.3304 |
| 3.1851 | 6.51 | 11000 | 4.3313 |
| 3.1911 | 6.81 | 11500 | 4.3317 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nomsgadded/textual_inversion_van_gogh
|
nomsgadded
| 2023-07-10T02:47:57Z | 48 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T01:24:49Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - nomsgadded/textual_inversion_van_gogh
These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
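A minimal, untested sketch of loading these weights with `diffusers`; the placeholder token used in the prompt is a guess, so check the repository files for the actual learned token:
```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 on CUDA is assumed here; use float32 on CPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("nomsgadded/textual_inversion_van_gogh")

# "<van-gogh>" is assumed to be the learned placeholder token.
image = pipe("a painting of a cat in the style of <van-gogh>").images[0]
image.save("cat.png")
```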
|
crisU8/bert-finetuned-ner-clinical-plncmm-8
|
crisU8
| 2023-07-10T02:41:04Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T02:31:37Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-8
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2568
- Precision: 0.7476
- Recall: 0.8063
- F1: 0.7758
- Accuracy: 0.9277
## Model description
More information needed
## Intended uses & limitations
More information needed
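A minimal, untested sketch of running the tagger with the `transformers` token-classification pipeline; the entity label set depends on the training data, which is not described here:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="crisU8/bert-finetuned-ner-clinical-plncmm-8",
    aggregation_strategy="simple",
)
# Spanish clinical text, since the base model is beto-clinical-wl-es.
print(ner("Paciente con diabetes mellitus tipo 2 en tratamiento con metformina."))
```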
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6628 | 1.0 | 502 | 0.2590 | 0.6791 | 0.7711 | 0.7222 | 0.9103 |
| 0.2168 | 2.0 | 1004 | 0.2309 | 0.7243 | 0.7975 | 0.7591 | 0.9238 |
| 0.1301 | 3.0 | 1506 | 0.2568 | 0.7476 | 0.8063 | 0.7758 | 0.9277 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
harrycools/q-FrozenLake-v1-4x4-noSlippery
|
harrycools
| 2023-07-10T02:40:33Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T02:40:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is assumed to be the pickle-loading helper from the
# Hugging Face Deep RL course notebook (it downloads and unpickles the model dict).
model = load_from_hub(repo_id="harrycools/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
eatrero/distilbert-base-uncased-finetuned-emotion
|
eatrero
| 2023-07-10T02:22:31Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T19:15:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9234507249341903
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.9235
- F1: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
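A minimal, untested sketch of running the classifier with the `transformers` text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="eatrero/distilbert-base-uncased-finetuned-emotion")
# top_k=None returns scores for every emotion label instead of only the top one.
print(classifier("I can't believe how happy this makes me!", top_k=None))
```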
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3302 | 0.9005 | 0.8959 |
| No log | 2.0 | 500 | 0.2222 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
douy/T5-11B-Ctrl-Simplification
|
douy
| 2023-07-10T02:22:04Z | 24 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:2212.09739",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-10T01:44:27Z |
---
license: apache-2.0
language:
- en
---
# Model Card for T5-11B-Ctrl-Simplification
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper:** [https://arxiv.org/abs/2212.09739](https://arxiv.org/abs/2212.09739)
- **Demo:** [http://lens-score.com](http://lens-score.com)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
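In the absence of official instructions, a minimal, untested sketch of loading the checkpoint with `transformers` follows. Note this is an 11B-parameter T5, so substantial GPU memory (or `accelerate` offloading) is required, and the control/prompt format expected for simplification is not documented in this card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("douy/T5-11B-Ctrl-Simplification")
# device_map="auto" assumes `accelerate` is installed; it shards the model across available devices.
model = AutoModelForSeq2SeqLM.from_pretrained("douy/T5-11B-Ctrl-Simplification", device_map="auto")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```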
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chickenfish/Monica_stable_v1
|
Chickenfish
| 2023-07-10T02:19:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T02:17:36Z |
---
license: creativeml-openrail-m
---
|
JBJoyce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
JBJoyce
| 2023-07-10T01:51:39Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-10T00:20:46Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4626
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
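A minimal, untested sketch of classifying a local audio file with the `transformers` audio-classification pipeline (`song.wav` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="JBJoyce/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
# The pipeline resamples the file to the model's expected sampling rate.
print(classifier("song.wav"))
```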
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6832 | 1.0 | 113 | 0.6136 | 0.79 |
| 0.3528 | 2.0 | 226 | 0.6350 | 0.77 |
| 0.178 | 3.0 | 339 | 0.7414 | 0.8 |
| 0.142 | 4.0 | 452 | 0.5234 | 0.84 |
| 0.1209 | 5.0 | 565 | 0.5176 | 0.88 |
| 0.0004 | 6.0 | 678 | 0.4160 | 0.88 |
| 0.0002 | 7.0 | 791 | 0.4798 | 0.9 |
| 0.0002 | 8.0 | 904 | 0.4693 | 0.89 |
| 0.1201 | 9.0 | 1017 | 0.4636 | 0.9 |
| 0.0002 | 10.0 | 1130 | 0.4626 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
EleutherAI/pythia-2.8b-v0
|
EleutherAI
| 2023-07-10T01:35:41Z | 1,819 | 5 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-20T03:56:10Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
EleutherAI/pythia-70m-deduped-v0
|
EleutherAI
| 2023-07-10T01:32:46Z | 933 | 8 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T00:24:53Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
EleutherAI/pythia-410m-deduped-v0
|
EleutherAI
| 2023-07-10T01:31:39Z | 860 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T00:48:44Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
EleutherAI/pythia-160m-deduped-v0
|
EleutherAI
| 2023-07-10T01:30:40Z | 847 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-18T02:59:41Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-160M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
EleutherAI/pythia-6.9b-deduped-v0
|
EleutherAI
| 2023-07-10T01:30:05Z | 833 | 20 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-18T03:04:37Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-6.9B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
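A minimal sketch of that workflow, comparing an early and a final checkpoint of the small 70M deduped model (the prompt is illustrative; the same pattern applies to Pythia-6.9B-deduped by swapping in its model name):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-70m-deduped"  # small model used here to keep the example light
tokenizer = AutoTokenizer.from_pretrained(MODEL)
inputs = tokenizer("The scientists measured", return_tensors="pt")

# Branch `step143000` is identical to `main`.
for revision in ("step1000", "step143000"):
    model = GPTNeoXForCausalLM.from_pretrained(MODEL, revision=revision)
    tokens = model.generate(**inputs, max_new_tokens=16)
    print(revision, tokenizer.decode(tokens[0]))
```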
You may also further fine-tune and adapt Pythia-6.9B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-6.9B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-6.9B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",                        # intermediate checkpoint, hosted as a branch
  cache_dir="./pythia-70m-deduped/step3000",  # local directory for the downloaded weights
)
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")  # tokenize a prompt
tokens = model.generate(**inputs)                       # continuation with default settings
tokenizer.decode(tokens[0])                             # convert token ids back to text
```
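Unless a generation config overrides it, `generate` defaults to greedy decoding with a short length limit. Continuing the snippet above, standard generation arguments can be passed for longer or sampled output (the values here are illustrative):

```python
tokens = model.generate(
    **inputs,
    max_new_tokens=50,   # allow a longer continuation
    do_sample=True,      # sample rather than decode greedily
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(tokens[0]))
```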
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-6.9B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
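In other words, checkpoint labels always count 2M-token steps regardless of the batch size actually used. A small illustrative helper (not an official Pythia utility) for mapping a branch label back to the model's actual optimizer step:

```python
# Map the N in a `stepN` branch name back to the actual optimizer step,
# given the model's batch size in tokens. Illustrative helper only.
def actual_step(label: int, batch_size_tokens: int) -> int:
    return label // (batch_size_tokens // 2_097_152)

assert actual_step(1000, 4_194_304) == 500    # 4M-batch model, e.g. pythia-1.4b
assert actual_step(1000, 2_097_152) == 1000   # 2M-batch model, e.g. pythia-6.9b
```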
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
havingAfish/CDA
|
havingAfish
| 2023-07-10T01:00:15Z | 0 | 0 | null |
[
"dataset:movie_rationales",
"region:us"
] | null | 2023-07-10T00:59:26Z |
---
datasets:
- movie_rationales
---
|
crisU8/bert-finetuned-ner-clinical-trials-2
|
crisU8
| 2023-07-10T00:48:07Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T00:29:03Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-trials-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-trials-2
This model is a fine-tuned version of [lcampillos/roberta-es-clinical-trials-ner](https://huggingface.co/lcampillos/roberta-es-clinical-trials-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3516
- Precision: 0.7329
- Recall: 0.7755
- F1: 0.7536
- Accuracy: 0.9141
## Model description
More information needed
## Intended uses & limitations
More information needed
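In the absence of documented usage guidance, here is a minimal inference sketch. The base model targets Spanish clinical-trial text, so a Spanish example sentence is assumed; the exact label set produced is not documented in this card.

```python
from transformers import pipeline

# Minimal token-classification sketch; the example sentence is illustrative.
ner = pipeline(
    "token-classification",
    model="crisU8/bert-finetuned-ner-clinical-trials-2",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Los pacientes recibieron 50 mg de atenolol al día durante el ensayo."))
```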
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.158 | 1.0 | 502 | 0.3028 | 0.7104 | 0.7700 | 0.7390 | 0.9144 |
| 0.1061 | 2.0 | 1004 | 0.3578 | 0.7020 | 0.7772 | 0.7377 | 0.9094 |
| 0.123 | 3.0 | 1506 | 0.3296 | 0.7237 | 0.7750 | 0.7485 | 0.9139 |
| 0.1007 | 4.0 | 2008 | 0.3516 | 0.7329 | 0.7755 | 0.7536 | 0.9141 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LongshenOu/lyric-trans-en2zh
|
LongshenOu
| 2023-07-10T00:16:10Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-10T00:07:39Z |
---
license: cc-by-nc-sa-4.0
---
|
NasimB/gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k
|
NasimB
| 2023-07-10T00:11:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-09T22:15:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1812
## Model description
More information needed
## Intended uses & limitations
More information needed
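In the absence of documented usage guidance, here is a minimal generation sketch; the prompt and decoding settings are illustrative.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-guten-rarity-all-mod-repetition-iorder-5k-p5k",
)
# Sample a short continuation from an illustrative prompt.
out = generator("Once upon a time", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```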
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7049 | 0.3 | 500 | 5.6332 |
| 5.3603 | 0.59 | 1000 | 5.2033 |
| 5.0063 | 0.89 | 1500 | 4.9509 |
| 4.7286 | 1.18 | 2000 | 4.7987 |
| 4.5752 | 1.48 | 2500 | 4.6728 |
| 4.4634 | 1.78 | 3000 | 4.5663 |
| 4.3226 | 2.07 | 3500 | 4.4933 |
| 4.1472 | 2.37 | 4000 | 4.4458 |
| 4.1157 | 2.67 | 4500 | 4.3824 |
| 4.0756 | 2.96 | 5000 | 4.3282 |
| 3.8402 | 3.26 | 5500 | 4.3258 |
| 3.8183 | 3.55 | 6000 | 4.2905 |
| 3.7968 | 3.85 | 6500 | 4.2597 |
| 3.6538 | 4.15 | 7000 | 4.2640 |
| 3.5239 | 4.44 | 7500 | 4.2506 |
| 3.5235 | 4.74 | 8000 | 4.2375 |
| 3.4943 | 5.04 | 8500 | 4.2350 |
| 3.3327 | 5.33 | 9000 | 4.2405 |
| 3.3319 | 5.63 | 9500 | 4.2383 |
| 3.3325 | 5.92 | 10000 | 4.2378 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|