| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-27 18:27:08) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-27 18:22:57) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
YakovElm/Qt_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T11:35:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T11:35:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0002
- Train Accuracy: 1.0
- Validation Loss: 0.4444
- Validation Accuracy: 0.9505
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0261 | 0.9997 | 0.3635 | 0.9505 | 0 |
| 0.0005 | 1.0 | 0.4156 | 0.9505 | 1 |
| 0.0002 | 1.0 | 0.4444 | 0.9505 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
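A minimal inference sketch, assuming this checkpoint works with the standard Transformers text-classification pipeline (the example input is illustrative):
```python
from transformers import pipeline

# Assumption: the repo contains a TF checkpoint usable with the text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="YakovElm/Qt_15_BERT_Under_Sampling",
    framework="tf",
)
print(classifier("Example issue text to classify"))
```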
|
Bodolaz/Unit-4.2-final3
|
Bodolaz
| 2023-06-28T11:26:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T11:26:15Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Unit-4.2-final3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.00 +/- 24.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V8
|
NickyNicky
| 2023-06-28T11:09:07Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T11:09:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
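A minimal loading sketch, assuming this repo holds only a PEFT adapter and that the base model is mosaicml/mpt-7b-instruct (the base-model id is an assumption inferred from the repo name):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the adapter was trained on top of mosaicml/mpt-7b-instruct.
base_id = "mosaicml/mpt-7b-instruct"
adapter_id = "NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V8"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
```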
|
wyklq/falcon-40b-gptq
|
wyklq
| 2023-06-28T11:08:48Z | 9 | 2 |
transformers
|
[
"transformers",
"safetensors",
"RefinedWeb",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T10:58:03Z |
A strict copy of https://huggingface.co/tiiuae/falcon-40b, but quantized with GPTQ (on wikitext-2, 4 bits, group size 128).
Intended to be used with https://github.com/huggingface/text-generation-inference
```
model=huggingface/falcon-40b-gptq
num_shard=2
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:0.8 --model-id $model --num-shard $num_shard --quantize gptq
```
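Once the container is running, it can be queried over HTTP following the text-generation-inference REST API; a minimal client sketch:
```python
import requests

# The container above exposes the text-generation-inference REST API on port 8080.
response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "The capital of France is", "parameters": {"max_new_tokens": 64}},
)
print(response.json()["generated_text"])
```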
For full configuration and usage outside docker, please refer to https://github.com/huggingface/text-generation-inference
|
robrecht/ppo-Huggy
|
robrecht
| 2023-06-28T11:08:26Z | 11 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:32:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: robrecht/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
fatcat22/a2c-PandaReachDense-v2
|
fatcat22
| 2023-06-28T11:00:08Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T10:59:08Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.70 +/- 0.20
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the zip filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust it to the actual file in the repository.
checkpoint = load_from_hub("fatcat22/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
nayralabs/test
|
nayralabs
| 2023-06-28T10:51:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"code",
"text-to-image",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:GAIR/lima",
"arxiv:1910.09700",
"license:bsd",
"region:us"
] |
text-to-image
| 2023-06-28T10:49:16Z |
---
license: bsd
datasets:
- fka/awesome-chatgpt-prompts
- GAIR/lima
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V7
|
NickyNicky
| 2023-06-28T10:50:38Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T10:50:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
BaoKien/albert_squadv2
|
BaoKien
| 2023-06-28T10:34:57Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-28T09:52:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert_squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_squadv2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 4.7365 |
| No log | 2.0 | 10 | 4.3391 |
| No log | 3.0 | 15 | 4.1889 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
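A minimal usage sketch, assuming the checkpoint works with the standard question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="BaoKien/albert_squadv2")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="albert_squadv2 is a fine-tuned version of albert-base-v2 on the squad_v2 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```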
|
mnavas/bert-finetuned-token-reqsolvgencat
|
mnavas
| 2023-06-28T10:34:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-28T09:43:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto-finetuned-token-reqsolvgencat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-finetuned-token-reqsolvgencat
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0573
- Precision: 0.1111
- Recall: 0.5
- F1: 0.1818
- Accuracy: 0.7765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8551 | 1.0 | 3 | 1.1478 | 0.0 | 0.0 | 0.0 | 0.4696 |
| 1.4354 | 2.0 | 6 | 0.7403 | 0.0 | 0.0 | 0.0 | 0.6088 |
| 0.6532 | 3.0 | 9 | 0.6900 | 0.0 | 0.0 | 0.0 | 0.6010 |
| 0.406 | 4.0 | 12 | 2.0318 | 0.0 | 0.0 | 0.0 | 0.5284 |
| 1.3514 | 5.0 | 15 | 1.3364 | 0.0 | 0.0 | 0.0 | 0.6186 |
| 0.419 | 6.0 | 18 | 0.9463 | 0.0 | 0.0 | 0.0 | 0.6 |
| 0.4816 | 7.0 | 21 | 0.9409 | 0.0 | 0.0 | 0.0 | 0.6314 |
| 0.8621 | 8.0 | 24 | 0.9034 | 0.0 | 0.0 | 0.0 | 0.7010 |
| 0.2981 | 9.0 | 27 | 0.9617 | 0.0 | 0.0 | 0.0 | 0.6039 |
| 0.5882 | 10.0 | 30 | 0.5289 | 0.0 | 0.0 | 0.0 | 0.8 |
| 0.4327 | 11.0 | 33 | 0.6573 | 0.2 | 0.5 | 0.2857 | 0.7882 |
| 0.2076 | 12.0 | 36 | 0.9207 | 0.0 | 0.0 | 0.0 | 0.6784 |
| 0.1454 | 13.0 | 39 | 0.7729 | 0.0 | 0.0 | 0.0 | 0.7882 |
| 0.2339 | 14.0 | 42 | 0.9067 | 0.0 | 0.0 | 0.0 | 0.7608 |
| 0.2281 | 15.0 | 45 | 0.4732 | 0.0 | 0.0 | 0.0 | 0.7892 |
| 0.1058 | 16.0 | 48 | 0.8219 | 0.0667 | 0.5 | 0.1176 | 0.7961 |
| 0.0577 | 17.0 | 51 | 0.9698 | 0.1429 | 0.5 | 0.2222 | 0.8147 |
| 0.0354 | 18.0 | 54 | 1.3158 | 0.0714 | 0.5 | 0.125 | 0.7549 |
| 0.0258 | 19.0 | 57 | 2.2414 | 0.1667 | 0.5 | 0.25 | 0.6941 |
| 0.0456 | 20.0 | 60 | 1.2596 | 0.0667 | 0.5 | 0.1176 | 0.7520 |
| 0.0339 | 21.0 | 63 | 1.8941 | 0.1667 | 0.5 | 0.25 | 0.7069 |
| 0.02 | 22.0 | 66 | 1.0201 | 0.1 | 0.5 | 0.1667 | 0.7941 |
| 0.0257 | 23.0 | 69 | 1.4716 | 0.0909 | 0.5 | 0.1538 | 0.7716 |
| 0.0148 | 24.0 | 72 | 2.6551 | 0.0 | 0.0 | 0.0 | 0.6853 |
| 0.0257 | 25.0 | 75 | 1.7904 | 0.125 | 0.5 | 0.2 | 0.7324 |
| 0.0183 | 26.0 | 78 | 2.2579 | 0.2 | 0.5 | 0.2857 | 0.7078 |
| 0.0218 | 27.0 | 81 | 1.3332 | 0.1667 | 0.5 | 0.25 | 0.7137 |
| 0.0148 | 28.0 | 84 | 0.7506 | 0.0769 | 0.5 | 0.1333 | 0.8696 |
| 0.0081 | 29.0 | 87 | 2.2165 | 0.1667 | 0.5 | 0.25 | 0.7059 |
| 0.0098 | 30.0 | 90 | 2.5291 | 0.2 | 0.5 | 0.2857 | 0.7 |
| 0.0072 | 31.0 | 93 | 2.5877 | 0.25 | 0.5 | 0.3333 | 0.6990 |
| 0.0162 | 32.0 | 96 | 2.7463 | 0.1667 | 0.5 | 0.25 | 0.6941 |
| 0.0073 | 33.0 | 99 | 2.2100 | 0.1429 | 0.5 | 0.2222 | 0.7176 |
| 0.0057 | 34.0 | 102 | 1.5509 | 0.0769 | 0.5 | 0.1333 | 0.7814 |
| 0.0077 | 35.0 | 105 | 1.8449 | 0.1111 | 0.5 | 0.1818 | 0.7578 |
| 0.0071 | 36.0 | 108 | 2.3808 | 0.3333 | 0.5 | 0.4 | 0.7127 |
| 0.0063 | 37.0 | 111 | 1.1188 | 0.125 | 0.5 | 0.2 | 0.8598 |
| 0.0067 | 38.0 | 114 | 1.7108 | 0.0 | 0.0 | 0.0 | 0.7892 |
| 0.0063 | 39.0 | 117 | 1.8995 | 0.1 | 0.5 | 0.1667 | 0.7363 |
| 0.0086 | 40.0 | 120 | 2.0472 | 0.1667 | 0.5 | 0.25 | 0.7137 |
| 0.0028 | 41.0 | 123 | 1.8879 | 0.1111 | 0.5 | 0.1818 | 0.7245 |
| 0.0084 | 42.0 | 126 | 1.9545 | 0.1667 | 0.5 | 0.25 | 0.7157 |
| 0.0018 | 43.0 | 129 | 2.5274 | 0.0 | 0.0 | 0.0 | 0.6922 |
| 0.0031 | 44.0 | 132 | 2.0330 | 0.0909 | 0.5 | 0.1538 | 0.7245 |
| 0.0034 | 45.0 | 135 | 0.9381 | 0.1 | 0.5 | 0.1667 | 0.8569 |
| 0.0016 | 46.0 | 138 | 0.6307 | 0.1111 | 0.5 | 0.1818 | 0.8980 |
| 0.0011 | 47.0 | 141 | 0.7823 | 0.0667 | 0.5 | 0.1176 | 0.8676 |
| 0.0021 | 48.0 | 144 | 1.1034 | 0.0909 | 0.5 | 0.1538 | 0.8373 |
| 0.0025 | 49.0 | 147 | 1.4482 | 0.2 | 0.5 | 0.2857 | 0.8108 |
| 0.0023 | 50.0 | 150 | 1.5541 | 0.1667 | 0.5 | 0.25 | 0.8069 |
| 0.0007 | 51.0 | 153 | 1.7164 | 0.1667 | 0.5 | 0.25 | 0.8010 |
| 0.0002 | 52.0 | 156 | 1.8291 | 0.1667 | 0.5 | 0.25 | 0.7941 |
| 0.0002 | 53.0 | 159 | 1.8918 | 0.1429 | 0.5 | 0.2222 | 0.7892 |
| 0.0016 | 54.0 | 162 | 1.8814 | 0.2 | 0.5 | 0.2857 | 0.7961 |
| 0.0013 | 55.0 | 165 | 1.8920 | 0.1 | 0.5 | 0.1667 | 0.7873 |
| 0.0009 | 56.0 | 168 | 1.8802 | 0.1 | 0.5 | 0.1667 | 0.7843 |
| 0.0017 | 57.0 | 171 | 1.7839 | 0.1111 | 0.5 | 0.1818 | 0.8 |
| 0.0028 | 58.0 | 174 | 1.7409 | 0.1 | 0.5 | 0.1667 | 0.7961 |
| 0.0029 | 59.0 | 177 | 2.1381 | 0.1 | 0.5 | 0.1667 | 0.7196 |
| 0.0008 | 60.0 | 180 | 2.4554 | 0.125 | 0.5 | 0.2 | 0.6922 |
| 0.0012 | 61.0 | 183 | 2.7071 | 0.125 | 0.5 | 0.2 | 0.6676 |
| 0.0011 | 62.0 | 186 | 2.5624 | 0.125 | 0.5 | 0.2 | 0.6814 |
| 0.0009 | 63.0 | 189 | 2.1029 | 0.2 | 0.5 | 0.2857 | 0.7363 |
| 0.0011 | 64.0 | 192 | 1.5984 | 0.125 | 0.5 | 0.2 | 0.7804 |
| 0.0005 | 65.0 | 195 | 1.4155 | 0.0909 | 0.5 | 0.1538 | 0.8088 |
| 0.001 | 66.0 | 198 | 1.3276 | 0.1111 | 0.5 | 0.1818 | 0.8225 |
| 0.0002 | 67.0 | 201 | 1.2844 | 0.1429 | 0.5 | 0.2222 | 0.8255 |
| 0.0076 | 68.0 | 204 | 1.2601 | 0.1667 | 0.5 | 0.25 | 0.8275 |
| 0.0003 | 69.0 | 207 | 1.3359 | 0.1429 | 0.5 | 0.2222 | 0.8343 |
| 0.0027 | 70.0 | 210 | 1.2452 | 0.25 | 0.5 | 0.3333 | 0.8363 |
| 0.004 | 71.0 | 213 | 1.7228 | 0.1 | 0.5 | 0.1667 | 0.7775 |
| 0.0004 | 72.0 | 216 | 2.7878 | 0.5 | 0.5 | 0.5 | 0.6804 |
| 1.1257 | 73.0 | 219 | 2.5785 | 0.5 | 0.5 | 0.5 | 0.6804 |
| 0.0209 | 74.0 | 222 | 0.1081 | 0.1111 | 0.5 | 0.1818 | 0.9775 |
| 0.0486 | 75.0 | 225 | 1.5039 | 0.0667 | 0.5 | 0.1176 | 0.7784 |
| 0.0012 | 76.0 | 228 | 3.2975 | 0.0 | 0.0 | 0.0 | 0.6098 |
| 0.0929 | 77.0 | 231 | 2.4796 | 0.3333 | 0.5 | 0.4 | 0.6618 |
| 0.0006 | 78.0 | 234 | 1.6117 | 0.125 | 0.5 | 0.2 | 0.7951 |
| 0.0077 | 79.0 | 237 | 1.8651 | 0.125 | 0.5 | 0.2 | 0.7127 |
| 0.0032 | 80.0 | 240 | 2.0187 | 0.1667 | 0.5 | 0.25 | 0.6873 |
| 0.0007 | 81.0 | 243 | 1.9540 | 0.1 | 0.5 | 0.1667 | 0.6902 |
| 0.0011 | 82.0 | 246 | 1.8161 | 0.0714 | 0.5 | 0.125 | 0.7275 |
| 0.0009 | 83.0 | 249 | 1.7688 | 0.1111 | 0.5 | 0.1818 | 0.7657 |
| 0.0006 | 84.0 | 252 | 1.7025 | 0.1667 | 0.5 | 0.25 | 0.7912 |
| 0.0003 | 85.0 | 255 | 1.6924 | 0.2 | 0.5 | 0.2857 | 0.7980 |
| 0.0004 | 86.0 | 258 | 1.7019 | 0.1667 | 0.5 | 0.25 | 0.7980 |
| 0.0008 | 87.0 | 261 | 1.7386 | 0.2 | 0.5 | 0.2857 | 0.7971 |
| 0.0002 | 88.0 | 264 | 1.7882 | 0.1667 | 0.5 | 0.25 | 0.7961 |
| 0.0001 | 89.0 | 267 | 1.8405 | 0.0833 | 0.5 | 0.1429 | 0.7833 |
| 0.0004 | 90.0 | 270 | 1.8319 | 0.125 | 0.5 | 0.2 | 0.7931 |
| 0.0003 | 91.0 | 273 | 1.8520 | 0.125 | 0.5 | 0.2 | 0.7931 |
| 0.0001 | 92.0 | 276 | 1.8876 | 0.0909 | 0.5 | 0.1538 | 0.7873 |
| 0.0001 | 93.0 | 279 | 1.9220 | 0.1 | 0.5 | 0.1667 | 0.7814 |
| 0.0004 | 94.0 | 282 | 2.0870 | 0.1 | 0.5 | 0.1667 | 0.7333 |
| 0.0004 | 95.0 | 285 | 2.1469 | 0.1111 | 0.5 | 0.1818 | 0.7225 |
| 0.0009 | 96.0 | 288 | 2.1726 | 0.1429 | 0.5 | 0.2222 | 0.7196 |
| 0.0007 | 97.0 | 291 | 1.9247 | 0.1 | 0.5 | 0.1667 | 0.7667 |
| 0.0012 | 98.0 | 294 | 1.8172 | 0.1111 | 0.5 | 0.1818 | 0.7902 |
| 0.0002 | 99.0 | 297 | 1.7789 | 0.1 | 0.5 | 0.1667 | 0.7980 |
| 0.0109 | 100.0 | 300 | 1.8386 | 0.0909 | 0.5 | 0.1538 | 0.7843 |
| 0.0008 | 101.0 | 303 | 2.0800 | 0.1 | 0.5 | 0.1667 | 0.7314 |
| 0.0002 | 102.0 | 306 | 2.5253 | 0.3333 | 0.5 | 0.4 | 0.6804 |
| 0.001 | 103.0 | 309 | 2.2149 | 0.1 | 0.5 | 0.1667 | 0.7118 |
| 0.0001 | 104.0 | 312 | 1.8607 | 0.1 | 0.5 | 0.1667 | 0.8020 |
| 0.0002 | 105.0 | 315 | 1.7786 | 0.1 | 0.5 | 0.1667 | 0.8127 |
| 0.0009 | 106.0 | 318 | 1.8432 | 0.125 | 0.5 | 0.2 | 0.8078 |
| 0.0001 | 107.0 | 321 | 2.0339 | 0.1111 | 0.5 | 0.1818 | 0.7784 |
| 0.0001 | 108.0 | 324 | 2.2628 | 0.0769 | 0.5 | 0.1333 | 0.7392 |
| 0.0007 | 109.0 | 327 | 2.3649 | 0.0714 | 0.5 | 0.125 | 0.7265 |
| 0.0001 | 110.0 | 330 | 2.4330 | 0.0714 | 0.5 | 0.125 | 0.7137 |
| 0.0001 | 111.0 | 333 | 2.4918 | 0.1 | 0.5 | 0.1667 | 0.7059 |
| 0.0002 | 112.0 | 336 | 2.2853 | 0.0909 | 0.5 | 0.1538 | 0.7647 |
| 0.0001 | 113.0 | 339 | 2.1945 | 0.1 | 0.5 | 0.1667 | 0.7765 |
| 0.0001 | 114.0 | 342 | 2.1446 | 0.1111 | 0.5 | 0.1818 | 0.7824 |
| 0.0008 | 115.0 | 345 | 2.1037 | 0.1111 | 0.5 | 0.1818 | 0.7863 |
| 0.0001 | 116.0 | 348 | 2.0753 | 0.1 | 0.5 | 0.1667 | 0.7902 |
| 0.0001 | 117.0 | 351 | 2.0617 | 0.1 | 0.5 | 0.1667 | 0.7902 |
| 0.0014 | 118.0 | 354 | 1.9908 | 0.1111 | 0.5 | 0.1818 | 0.7961 |
| 0.0009 | 119.0 | 357 | 1.9134 | 0.1111 | 0.5 | 0.1818 | 0.8 |
| 0.0009 | 120.0 | 360 | 1.8651 | 0.1111 | 0.5 | 0.1818 | 0.8020 |
| 0.0002 | 121.0 | 363 | 1.8480 | 0.1111 | 0.5 | 0.1818 | 0.8 |
| 0.0001 | 122.0 | 366 | 1.8558 | 0.0909 | 0.5 | 0.1538 | 0.7961 |
| 0.0001 | 123.0 | 369 | 1.8727 | 0.1111 | 0.5 | 0.1818 | 0.7892 |
| 0.0001 | 124.0 | 372 | 1.8951 | 0.125 | 0.5 | 0.2 | 0.7863 |
| 0.0007 | 125.0 | 375 | 1.8971 | 0.1667 | 0.5 | 0.25 | 0.7833 |
| 0.0011 | 126.0 | 378 | 1.8479 | 0.1667 | 0.5 | 0.25 | 0.7814 |
| 0.0002 | 127.0 | 381 | 1.7985 | 0.1429 | 0.5 | 0.2222 | 0.7804 |
| 0.0001 | 128.0 | 384 | 1.7712 | 0.125 | 0.5 | 0.2 | 0.7794 |
| 0.0001 | 129.0 | 387 | 1.7929 | 0.1111 | 0.5 | 0.1818 | 0.7755 |
| 0.0002 | 130.0 | 390 | 1.8551 | 0.1111 | 0.5 | 0.1818 | 0.7725 |
| 0.0001 | 131.0 | 393 | 1.9030 | 0.125 | 0.5 | 0.2 | 0.7716 |
| 0.0001 | 132.0 | 396 | 1.9421 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0009 | 133.0 | 399 | 1.9440 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0001 | 134.0 | 402 | 1.9455 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0001 | 135.0 | 405 | 1.9527 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0004 | 136.0 | 408 | 1.9493 | 0.125 | 0.5 | 0.2 | 0.7706 |
| 0.0001 | 137.0 | 411 | 1.9432 | 0.125 | 0.5 | 0.2 | 0.7706 |
| 0.0001 | 138.0 | 414 | 1.9491 | 0.125 | 0.5 | 0.2 | 0.7706 |
| 0.0002 | 139.0 | 417 | 1.9434 | 0.1 | 0.5 | 0.1667 | 0.7735 |
| 0.0006 | 140.0 | 420 | 1.9363 | 0.1111 | 0.5 | 0.1818 | 0.7784 |
| 0.0001 | 141.0 | 423 | 1.9330 | 0.1111 | 0.5 | 0.1818 | 0.7804 |
| 0.0001 | 142.0 | 426 | 1.9334 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 143.0 | 429 | 1.9366 | 0.125 | 0.5 | 0.2 | 0.7843 |
| 0.0001 | 144.0 | 432 | 1.9417 | 0.1111 | 0.5 | 0.1818 | 0.7843 |
| 0.0001 | 145.0 | 435 | 1.9467 | 0.1111 | 0.5 | 0.1818 | 0.7843 |
| 0.0001 | 146.0 | 438 | 1.9518 | 0.1111 | 0.5 | 0.1818 | 0.7843 |
| 0.0001 | 147.0 | 441 | 1.9574 | 0.1111 | 0.5 | 0.1818 | 0.7853 |
| 0.0001 | 148.0 | 444 | 1.9635 | 0.1111 | 0.5 | 0.1818 | 0.7853 |
| 0.0003 | 149.0 | 447 | 1.9643 | 0.1111 | 0.5 | 0.1818 | 0.7853 |
| 0.0001 | 150.0 | 450 | 1.9664 | 0.125 | 0.5 | 0.2 | 0.7853 |
| 0.0008 | 151.0 | 453 | 1.9830 | 0.125 | 0.5 | 0.2 | 0.7843 |
| 0.0001 | 152.0 | 456 | 1.9974 | 0.125 | 0.5 | 0.2 | 0.7824 |
| 0.0001 | 153.0 | 459 | 2.0083 | 0.125 | 0.5 | 0.2 | 0.7824 |
| 0.0007 | 154.0 | 462 | 2.0066 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 155.0 | 465 | 2.0047 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 156.0 | 468 | 2.0042 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 157.0 | 471 | 2.0075 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0008 | 158.0 | 474 | 2.0217 | 0.1111 | 0.5 | 0.1818 | 0.7824 |
| 0.0001 | 159.0 | 477 | 2.0326 | 0.125 | 0.5 | 0.2 | 0.7814 |
| 0.0001 | 160.0 | 480 | 2.0422 | 0.125 | 0.5 | 0.2 | 0.7814 |
| 0.0001 | 161.0 | 483 | 2.0493 | 0.125 | 0.5 | 0.2 | 0.7804 |
| 0.0001 | 162.0 | 486 | 2.0568 | 0.125 | 0.5 | 0.2 | 0.7804 |
| 0.0002 | 163.0 | 489 | 2.0818 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 164.0 | 492 | 2.0997 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 165.0 | 495 | 2.1127 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 166.0 | 498 | 2.1226 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 167.0 | 501 | 2.1311 | 0.1111 | 0.5 | 0.1818 | 0.7716 |
| 0.0001 | 168.0 | 504 | 2.1377 | 0.1111 | 0.5 | 0.1818 | 0.7716 |
| 0.0003 | 169.0 | 507 | 2.1296 | 0.1111 | 0.5 | 0.1818 | 0.7716 |
| 0.0001 | 170.0 | 510 | 2.1180 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0 | 171.0 | 513 | 2.1095 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0 | 172.0 | 516 | 2.1035 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 173.0 | 519 | 2.0999 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0 | 174.0 | 522 | 2.0984 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0004 | 175.0 | 525 | 2.0930 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0 | 176.0 | 528 | 2.0874 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.006 | 177.0 | 531 | 2.0807 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 178.0 | 534 | 2.0717 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 179.0 | 537 | 2.0721 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 180.0 | 540 | 2.0731 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.001 | 181.0 | 543 | 2.0712 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 182.0 | 546 | 2.0700 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 183.0 | 549 | 2.0692 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 184.0 | 552 | 2.0622 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0006 | 185.0 | 555 | 2.0582 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 186.0 | 558 | 2.0564 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 187.0 | 561 | 2.0562 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 188.0 | 564 | 2.0571 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 189.0 | 567 | 2.0587 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 190.0 | 570 | 2.0597 | 0.1111 | 0.5 | 0.1818 | 0.7735 |
| 0.0041 | 191.0 | 573 | 2.0555 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0002 | 192.0 | 576 | 2.0473 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 193.0 | 579 | 2.0480 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0002 | 194.0 | 582 | 2.0507 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 195.0 | 585 | 2.0541 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 196.0 | 588 | 2.0562 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 197.0 | 591 | 2.0586 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 198.0 | 594 | 2.0596 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 199.0 | 597 | 2.0604 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0003 | 200.0 | 600 | 2.0573 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
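A minimal usage sketch, assuming the checkpoint works with the standard token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mnavas/bert-finetuned-token-reqsolvgencat",
    aggregation_strategy="simple",
)
print(ner("Texto de ejemplo para etiquetar."))
```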
|
Tri1/12-18-finetuned-eng-to-para
|
Tri1
| 2023-06-28T10:34:13Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T16:09:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: 12-18-finetuned-eng-to-para
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 12-18-finetuned-eng-to-para
This model is a fine-tuned version of [Tri1/6-12-finetuned-eng-to-para](https://huggingface.co/Tri1/6-12-finetuned-eng-to-para) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0339
- Bleu: 35.7233
- Gen Len: 22.56
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0441 | 1.0 | 6250 | 0.0420 | 31.1327 | 22.08 |
| 0.0412 | 2.0 | 12500 | 0.0397 | 32.419 | 22.72 |
| 0.0396 | 3.0 | 18750 | 0.0371 | 35.4708 | 20.96 |
| 0.0385 | 4.0 | 25000 | 0.0356 | 34.6037 | 21.12 |
| 0.0377 | 5.0 | 31250 | 0.0343 | 35.7663 | 22.24 |
| 0.0381 | 6.0 | 37500 | 0.0339 | 35.7233 | 22.56 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
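A minimal usage sketch, assuming the checkpoint works with the standard text2text-generation pipeline (the example input is illustrative):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Tri1/12-18-finetuned-eng-to-para")
print(translator("How are you today?", max_length=64))
```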
|
NasimB/gpt2-dp-cl-length
|
NasimB
| 2023-06-28T10:24:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T07:55:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-cl-length
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-cl-length
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6691 | 0.26 | 500 | 5.8270 |
| 5.2775 | 0.53 | 1000 | 5.4588 |
| 4.9444 | 0.79 | 1500 | 5.2410 |
| 4.7094 | 1.05 | 2000 | 5.1417 |
| 4.5245 | 1.31 | 2500 | 4.9932 |
| 4.4306 | 1.58 | 3000 | 4.9406 |
| 4.3594 | 1.84 | 3500 | 4.9073 |
| 4.2194 | 2.1 | 4000 | 4.8631 |
| 4.0996 | 2.36 | 4500 | 4.8316 |
| 4.0865 | 2.63 | 5000 | 4.7686 |
| 4.05 | 2.89 | 5500 | 4.7494 |
| 3.8858 | 3.15 | 6000 | 4.7704 |
| 3.814 | 3.42 | 6500 | 4.6998 |
| 3.8032 | 3.68 | 7000 | 4.6793 |
| 3.7885 | 3.94 | 7500 | 4.6676 |
| 3.5761 | 4.2 | 8000 | 4.6765 |
| 3.5336 | 4.47 | 8500 | 4.6792 |
| 3.535 | 4.73 | 9000 | 4.6749 |
| 3.5307 | 4.99 | 9500 | 4.6553 |
| 3.2929 | 5.25 | 10000 | 4.7027 |
| 3.2874 | 5.52 | 10500 | 4.6926 |
| 3.2883 | 5.78 | 11000 | 4.6927 |
| 3.2606 | 6.04 | 11500 | 4.7025 |
| 3.1356 | 6.31 | 12000 | 4.7098 |
| 3.1392 | 6.57 | 12500 | 4.7123 |
| 3.1374 | 6.83 | 13000 | 4.7161 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
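A minimal generation sketch, assuming the checkpoint loads with the standard text-generation pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-dp-cl-length")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```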
|
YakovElm/Qt_10_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T10:22:56Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T10:22:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Train Accuracy: 1.0
- Validation Loss: 0.5572
- Validation Accuracy: 0.9416
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0351 | 0.9905 | 0.4427 | 0.9416 | 0 |
| 0.0003 | 1.0 | 0.5263 | 0.9416 | 1 |
| 0.0001 | 1.0 | 0.5572 | 0.9416 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rawkul/q.taxi.v3
|
rawkul
| 2023-06-28T10:20:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:44:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q.taxi.v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (assumed available here)
model = load_from_hub(repo_id="rawkul/q.taxi.v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KamilAin/bart-base-booksum
|
KamilAin
| 2023-06-28T10:19:50Z | 129 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"booksum",
"summary",
"summarization",
"book",
"en",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-23T13:46:59Z |
---
language: en
license: apache-2.0
tags:
- booksum
- summary
- summarization
- book
metrics:
- rouge
widget:
- text: "In the dead night, Frodo lay in a dream without light. Then he saw the young moon rising; under its thin light there loomed before him a black wall of rock, pierced by a dark arch like a great gate. It seemed to Frodo that he was lifted up, and passing over he saw that the rock-wall was a circle of hills, and that within it was a plain, and in the midst of the plain stood a pinnacle of stone, like a vast tower but not made by hands. On its top stood the figure of a man. The moon as it rose seemed to hang for a moment above his head and glistened in his white hair as the wind stirred it. Up from the dark plain below came the crying of fell voices, and the howling of many wolves. Suddenly a shadow, like the shape of great wings, passed across the moon. The figure lifted his arms and a light flashed from the staff that he wielded. A mighty eagle swept down and bore him away. The voices wailed and the wolves yammered. There was a noise like a strong wind blowing, and on it was borne the sound of hoofs, galloping, galloping, galloping from the East. ‘Black Riders!’ thought Frodo as he wakened, with the sound of the hoofs still echoing in his mind. He wondered if he would ever again have the courage to leave the safety of these stone walls. He lay motionless, still listening; but all was now silent, and at last he turned and fell asleep again or wandered into some other unremembered dream."
example_title: "book example"
datasets:
- kmfoda/booksum
---
# BART-base-Booksum
This is a BART-base model fine-tuned on the BookSum dataset.
- **Use cases:** book summarization, general text summarization.
- This is [facebook/bart-base](https://huggingface.co/facebook/bart-base), fine-tuned for five epochs.
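A minimal usage sketch with the summarization pipeline, reusing a snippet of the widget text above as input:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="KamilAin/bart-base-booksum")

# In practice the input would be a long book passage; a short snippet is used here for illustration.
text = (
    "In the dead night, Frodo lay in a dream without light. Then he saw the young moon rising; "
    "under its thin light there loomed before him a black wall of rock, pierced by a dark arch."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```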
|
dev-senolys/camembert_base_finetunned_one_thema_balanced_5_epochs
|
dev-senolys
| 2023-06-28T10:00:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T08:45:28Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: camembert_base_finetunned_one_thema_balanced_5_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_base_finetunned_one_thema_balanced_5_epochs
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 116 | 2.3340 |
| No log | 2.0 | 232 | 2.0439 |
| No log | 3.0 | 348 | 1.8438 |
| No log | 4.0 | 464 | 1.8124 |
| 1.9931 | 5.0 | 580 | 1.7778 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KYac/falcon-finetuned
|
KYac
| 2023-06-28T09:50:17Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T09:50:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
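The config above maps onto `BitsAndBytesConfig` when reloading the base model; a minimal sketch, assuming the adapter targets a Falcon base model (the exact base-model id is not stated in this card and is an assumption):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumption: the adapter was trained on tiiuae/falcon-7b; replace with the actual base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "KYac/falcon-finetuned")
```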
### Framework versions
- PEFT 0.4.0.dev0
|
Zhibinhong/checkpoints_new
|
Zhibinhong
| 2023-06-28T09:45:08Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T09:03:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Zhibinhong/checkpoints_new
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
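A minimal generation sketch with diffusers, using the instance prompt from the card metadata:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Zhibinhong/checkpoints_new", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```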
|
gozu888/Envit5-tuned
|
gozu888
| 2023-06-28T09:42:05Z | 24 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"translation",
"en-vi",
"vi-en",
"en",
"vi",
"dataset:mt_eng_vietnamese",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-06T14:21:02Z |
---
license: apache-2.0
datasets:
- mt_eng_vietnamese
language:
- en
- vi
tags:
- translation
- t5
- en-vi
- vi-en
---
# gozu888/Envit5-tuned
This model is a fine-tuned version of VietAI/envit5-translation on the mt_eng_vietnamese dataset. It achieves the following results on the evaluation set:
- Val Loss: 0.8422
- BLEU: 25.14
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** gozu888
- **Model type:** T5
- **Finetuned from model [optional]:** [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation)
### Hyperparameters
The following hyperparameters were used during training:
- Repo: [simpleT5](https://github.com/Shivanandroy/simpleT5)
- Dataset: [mt_eng_vietnamese](https://huggingface.co/datasets/mt_eng_vietnamese) + custom (~40%)
```python
source_max_token_len=128
target_max_token_len=128
batch_size=32
max_epochs=8
```
### Examples
+ This fine-tuned model, which was trained with a maximum sequence length of 128 tokens, is most effective at sentence-by-sentence translation.
```json
[
"en: I do not believe a human being can be creative.",
"en: If we perceive creation around us with a certain profoundness, we can imitate in many different ways, in permutations and combinations, and seem creative in society, but actually, we are not really creative.",
"en: Everything that can be created has already been done in creation.",
"en: We are clever craftsmen at the most.",
"en: If you define the word \“creativity\” as really creating something – whether you make a movie, paint something, build a building, speak, or whatever else – this is not really creative – it is clever imitation.",
"en: Because we have paid attention to different aspects of life, we are able to imitate in ways that others have not thought possible.",
]
```
```json
[
"vi: Tôi không tin rằng một con người có thể sáng tạo.",
"vi: Nếu chúng ta nhận biết được sáng tạo xung quanh chúng ta với một thâm sâu nào đó, chúng ta có thể bắt chước trong nhiều cách khác nhau, trong những hoán vị và những kết hợp, và dường như sáng tạo trong xã hội, nhưng thực ra, chúng ta không thực sự sáng tạo. ",
"vi: Mọi thứ có thể được tạo ra đều đã được thực hiện trong sáng tạo.",
"vi: Chúng ta là những thợ thủ công thông minh nhất.",
"vi: Nếu bạn định nghĩa từ \"sáng tạo\" là thực sự sáng tạo ra một cái gì đó - cho dù bạn làm một bộ phim, vẽ một cái gì đó, xây dựng một toà nhà, nói chuyện, hoặc bất cứ điều gì khác - đây không thực sự là sáng tạo - nó là sự bắt chước thông minh.",
"vi: Bởi vì chúng ta đã chú tâm đến những khía cạnh khác nhau của cuộc sống, chúng ta có thể bắt chước theo những cách mà những người khác không nghĩ là có thể.",
]
```
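A minimal inference sketch with Transformers, following the `en:` / `vi:` prefix convention shown in the examples above (generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gozu888/Envit5-tuned")
model = AutoModelForSeq2SeqLM.from_pretrained("gozu888/Envit5-tuned")

inputs = tokenizer("en: I do not believe a human being can be creative.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```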
|
Alexis47240/Modelingwork
|
Alexis47240
| 2023-06-28T09:39:48Z | 0 | 0 | null |
[
"fr",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] | null | 2023-06-28T09:39:07Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- fr
---
|
rawkul/q-FrozenLake-v1-4x4-noSlippery
|
rawkul
| 2023-06-28T09:26:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:26:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (assumed available here)
model = load_from_hub(repo_id="rawkul/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Nara-Lab/nallm-polyglot-ko-1.3b-base
|
Nara-Lab
| 2023-06-28T09:24:15Z | 2,272 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T01:12:03Z |
---
license: mit
language:
- ko
---
NA-LLM (나름) is a Korean Large Language Model (LLM) developed by 나라지식정보 (Nara Information).
https://github.com/Nara-Information/NA-LLM
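A minimal generation sketch, assuming the checkpoint loads with the standard causal-LM classes (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nara-Lab/nallm-polyglot-ko-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("안녕하세요,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```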
|
jensvw/lundar
|
jensvw
| 2023-06-28T09:13:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:13:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 201.77 +/- 100.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the zip filename below is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust it to the actual file in the repository.
checkpoint = load_from_hub("jensvw/lundar", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
elementoffear/ppo-Huggy
|
elementoffear
| 2023-06-28T09:12:35Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:12:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: elementoffear/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V2
|
NickyNicky
| 2023-06-28T09:06:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T09:06:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hztang/BART-large-base-custom
|
hztang
| 2023-06-28T08:57:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T01:48:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: BART-large-base-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-large-base-custom
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 238 | 0.6719 | 24.2424 | 19.2095 | 23.2835 | 23.9833 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch
|
NickyNicky
| 2023-06-28T08:52:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T08:52:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Matthijs/mms-tts-abp
|
Matthijs
| 2023-06-28T08:45:05Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-06-27T14:00:22Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS) : Text-to-Speech Models
This repository contains the **Ayta, Abellen (abp)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage
Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
## Usage
Using this checkpoint from Hugging Face Transformers:
```python
from transformers import VitsModel, VitsMmsTokenizer
import torch
model = VitsModel.from_pretrained("Matthijs/mms-tts-abp")
tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-abp")
text = "some example text in the Ayta, Abellen language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs)
from IPython.display import Audio
Audio(output.audio[0], rate=16000)
```
Note: For certain checkpoints, the input text must be converted to the Latin alphabet first using the
[uroman](https://github.com/isi-nlp/uroman) tool.
## Model credits
This model was developed by Vineel Pratap et al. and is licensed as **CC-BY-NC 4.0**.
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
|
cupcakeDriveby/q-taxi-v3
|
cupcakeDriveby
| 2023-06-28T08:39:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T08:39:14Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook (assumed available here)
model = load_from_hub(repo_id="cupcakeDriveby/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mszpro/CoreML_StableDiffusion-v-1-5_ControlledUnet_SplitEinsum
|
mszpro
| 2023-06-28T08:17:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-28T08:13:17Z |
# Stable Diffusion 1.5 ControlledUnet file for CoreML
This is the compiled ControlledUNet file of Stable Diffusion 1.5, used for CoreML image generation with ControlNet.
It accepts 12 additional_residual inputs.
It is compiled for SPLIT EINSUM, so it also runs on iPhone and iPad using the Apple Neural Engine.
|
MU-NLPC/calc-baseline-t5-xl
|
MU-NLPC
| 2023-06-28T08:10:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-24T09:44:44Z |
This is a baseline model for our [calculator-assisted models](https://huggingface.co/models?search=emnlp2023)
trained on a mixture of all our [Calc-X datasets](https://huggingface.co/datasets?search=emnlp2023).
See the corresponding paper for details.
The reported results can be reproduced by using the [evaluation script](https://github.com/emnlp2023sub/gadgets/blob/65e24e810cf5ea20aceb8a3c8ddbc19f035ab694/examples/test_calc.py)
from the project repository.
|
MU-NLPC/calc-baseline-t5-large
|
MU-NLPC
| 2023-06-28T08:09:37Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T15:31:51Z |
This is a baseline model for our [calculator-assisted models](https://huggingface.co/models?search=emnlp2023)
trained on a mixture of all our [Calc-X datasets](https://huggingface.co/datasets?search=emnlp2023).
See the corresponding paper for details.
The reported results can be reproduced by using the [evaluation script](https://github.com/emnlp2023sub/gadgets/blob/65e24e810cf5ea20aceb8a3c8ddbc19f035ab694/examples/test_calc.py)
from the project repository.
|
Yorth/dolly-lora
|
Yorth
| 2023-06-28T08:07:48Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T08:07:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
YakovElm/MariaDB_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T07:59:13Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T07:58:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0008
- Train Accuracy: 1.0
- Validation Loss: 0.2337
- Validation Accuracy: 0.9698
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0627 | 0.9992 | 0.1862 | 0.9698 | 0 |
| 0.0016 | 1.0 | 0.2186 | 0.9698 | 1 |
| 0.0008 | 1.0 | 0.2337 | 0.9698 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Assem-Ihab/trainingthemodel3
|
Assem-Ihab
| 2023-06-28T07:53:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T07:39:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: trainingthemodel3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainingthemodel3
This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6417
- Rouge1: 0.1136
- Rouge2: 0.0429
- Rougel: 0.0938
- Rougelsum: 0.0936
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 31 | 2.9297 | 0.1141 | 0.0449 | 0.0941 | 0.0942 | 20.0 |
| No log | 2.0 | 62 | 2.7345 | 0.1099 | 0.0426 | 0.0908 | 0.0908 | 20.0 |
| No log | 3.0 | 93 | 2.6680 | 0.1123 | 0.0428 | 0.093 | 0.0929 | 20.0 |
| No log | 4.0 | 124 | 2.6417 | 0.1136 | 0.0429 | 0.0938 | 0.0936 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
OverShifted/ppo-LunarLander-v2
|
OverShifted
| 2023-06-28T07:48:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T07:47:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 230.07 +/- 24.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("OverShifted/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
YakovElm/Hyperledger_5_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T07:47:59Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T07:47:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_5_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_5_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0898
- Train Accuracy: 0.9727
- Validation Loss: 0.8739
- Validation Accuracy: 0.7521
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6014 | 0.6783 | 0.5787 | 0.7168 | 0 |
| 0.2834 | 0.8865 | 0.7050 | 0.7687 | 1 |
| 0.0898 | 0.9727 | 0.8739 | 0.7521 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yhna/Reinforce-pixelcopter
|
yhna
| 2023-06-28T07:43:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T07:43:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.60 +/- 27.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YakovElm/MariaDB_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T07:34:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T07:33:52Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0009
- Train Accuracy: 1.0
- Validation Loss: 0.3045
- Validation Accuracy: 0.9598
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0769 | 0.9849 | 0.2493 | 0.9598 | 0 |
| 0.0017 | 1.0 | 0.2868 | 0.9598 | 1 |
| 0.0009 | 1.0 | 0.3045 | 0.9598 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/rmfbtsrvcv1tgrll14
|
nolanaatama
| 2023-06-28T07:30:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T07:26:26Z |
---
license: creativeml-openrail-m
---
|
swardiantara/drone-sentiment
|
swardiantara
| 2023-06-28T07:25:12Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-17T06:24:08Z |
---
license: mit
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: text-classification
widget:
- text: "Battery temperature is below 15 degrees Celsius. Warm up the battery temperature to above 25 degree Celsius to ensure a safe flight."
example_title: "Negative Sentiment"
- text: "Aircraft is returning to the Home Point. Minimum RTH Altitude is 30m. You can reset the RTH Altitude in Remote Controller Settings after cancelling RTH if necessary."
example_title: "Positive Sentiment"
---
|
swardiantara/drone-term-extractor
|
swardiantara
| 2023-06-28T07:22:35Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"drone",
"drone forensics",
"named entity recognition",
"en",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-23T09:37:03Z |
---
license: gpl
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: token-classification
tags:
- drone
- drone forensics
- named entity recognition
widget:
- text: "Compass abnormal. Solution: 1. Ensure there are no magnets or metal objects near the aircraft. The ground or walls may contain metal. Move away from sources of interference before attempting flight. 2. Calibrate Compass Before Takeoff"
example_title: "Example 1"
- text: "The flight attitude angle is larger in Sport mode. The gimbal will rotate when the aircraft starts or stops. Use Normal mode if required for stable shooting"
example_title: "Example 2"
- text: "Motor speed error. Land or return to home promptly. After powering off the aircraft, replace the propeller on the beeping ESC. If the issue persists, contact DJI Support"
example_title: "Example 3"
- text: "GPS signal low. Aircraft unable to auto hover and takeoff restricted. Move to environment with adequate light. Unlocking takeoff restrictions not recommended"
example_title: "Example 4"
---
|
Shubham09/falcon_p2
|
Shubham09
| 2023-06-28T07:22:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T07:12:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
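For reference, a minimal sketch of the equivalent 4-bit `BitsAndBytesConfig`, reconstructed from the values listed above (the actual training script is not part of this card):
```python
# Sketch only: 4-bit NF4 quantization config matching the values above
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```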
### Framework versions
- PEFT 0.4.0.dev0
|
kejolong/etomisa
|
kejolong
| 2023-06-28T07:21:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T04:29:02Z |
---
license: creativeml-openrail-m
---
|
eocampop/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
eocampop
| 2023-06-28T06:48:14Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T15:54:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2025
- F1: 0.5425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4868 | 1.0 | 766 | 1.2168 | 0.5419 |
| 0.7607 | 2.0 | 1532 | 1.2025 | 0.5425 |
| 0.603 | 3.0 | 2298 | 1.1996 | 0.5414 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/MariaDB_5_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T06:45:06Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T06:44:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_5_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_5_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Train Accuracy: 1.0
- Validation Loss: 0.4829
- Validation Accuracy: 0.9322
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0727 | 0.9808 | 0.3510 | 0.9322 | 0 |
| 0.0037 | 1.0 | 0.4349 | 0.9322 | 1 |
| 0.0014 | 1.0 | 0.4829 | 0.9322 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broonion/RLcourse-unit2-q-FrozenLake-v1-4x4-noSlippery
|
Broonion
| 2023-06-28T06:43:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T06:43:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper function defined in the course notebook (not a pip-installable import)
model = load_from_hub(repo_id="Broonion/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlonCohen/social-groups-ner-first-try
|
AlonCohen
| 2023-06-28T06:24:31Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-29T08:08:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: social-groups-ner-first-try
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# social-groups-ner-first-try
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Sidharthkr/MPT-7b-chat-GGML
|
Sidharthkr
| 2023-06-28T06:20:54Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-28T05:41:12Z |
## Compatibility
These files are not compatible with llama.cpp.
Currently they can be used with:
- KoboldCpp, a powerful inference engine based on llama.cpp, with a good UI: KoboldCpp
- The ctransformers Python library, which includes LangChain support: ctransformers
- The GPT4All-UI, which uses ctransformers: GPT4All-UI
- rustformers' llm
- The example mpt binary provided with ggml
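A minimal loading sketch with the `ctransformers` library (the repo id resolution and generation call are assumptions; point the first argument at a specific .ggml file if needed):
```python
# Sketch only: load the GGML weights with ctransformers
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "Sidharthkr/MPT-7b-chat-GGML",
    model_type="mpt",
)
print(llm("Write a short greeting.", max_new_tokens=64))
```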
|
YakovElm/Jira_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T06:20:21Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T06:19:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Train Accuracy: 1.0
- Validation Loss: 0.4661
- Validation Accuracy: 0.9338
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1013 | 0.9780 | 0.3644 | 0.9338 | 0 |
| 0.0030 | 1.0 | 0.4356 | 0.9338 | 1 |
| 0.0014 | 1.0 | 0.4661 | 0.9338 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Jira_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T06:00:12Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T05:59:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0012
- Train Accuracy: 1.0
- Validation Loss: 3.4399
- Validation Accuracy: 0.5205
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1521 | 0.9675 | 2.5971 | 0.5205 | 0 |
| 0.0028 | 1.0 | 3.2106 | 0.5205 | 1 |
| 0.0012 | 1.0 | 3.4399 | 0.5205 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dyedream/Reinfore-CardPole
|
dyedream
| 2023-06-28T05:43:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T05:42:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinfore-CardPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YakovElm/Jira_5_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T05:20:05Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T05:19:25Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_5_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_5_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0017
- Train Accuracy: 1.0
- Validation Loss: 3.5486
- Validation Accuracy: 0.4858
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1443 | 0.9748 | 2.5459 | 0.4858 | 0 |
| 0.0042 | 1.0 | 3.2782 | 0.4858 | 1 |
| 0.0017 | 1.0 | 3.5486 | 0.4858 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pawankmrs/bert-finetuned-ner
|
pawankmrs
| 2023-06-28T05:08:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-28T04:49:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9365684001324942
- name: Recall
type: recall
value: 0.9516997643890945
- name: F1
type: f1
value: 0.9440734557595992
- name: Accuracy
type: accuracy
value: 0.9866221227997881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0652
- Precision: 0.9366
- Recall: 0.9517
- F1: 0.9441
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0704 | 1.0 | 1756 | 0.0695 | 0.9159 | 0.9349 | 0.9253 | 0.9822 |
| 0.0229 | 2.0 | 3512 | 0.0656 | 0.9345 | 0.9505 | 0.9424 | 0.9867 |
| 0.0116 | 3.0 | 5268 | 0.0652 | 0.9366 | 0.9517 | 0.9441 | 0.9866 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
allman/bert-base-uncased-finetuned-MeIA-AnalisisDeSentimientos
|
allman
| 2023-06-28T05:07:52Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T04:02:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-uncased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0784
- F1: 0.5207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1946 | 1.0 | 766 | 1.1920 | 0.4331 |
| 1.0715 | 2.0 | 1532 | 1.0936 | 0.4909 |
| 1.0093 | 3.0 | 2298 | 1.0583 | 0.5116 |
| 0.8907 | 4.0 | 3064 | 1.0784 | 0.5207 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ventalik/fias_NER
|
Ventalik
| 2023-06-28T04:56:54Z | 139 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-18T16:04:55Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: fias_NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fias_NER
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0011
- eval_precision: 0.9993
- eval_recall: 0.9994
- eval_f1: 0.9994
- eval_accuracy: 0.9996
- eval_runtime: 783.8012
- eval_samples_per_second: 2551.667
- eval_steps_per_second: 19.935
- epoch: 1.12
- step: 1200000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000000
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
eliztmq/q-taxi
|
eliztmq
| 2023-06-28T04:56:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T04:56:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper function defined in the course notebook (not a pip-installable import)
model = load_from_hub(repo_id="eliztmq/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Hansr/Checkpoint
|
Hansr
| 2023-06-28T04:47:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T14:29:07Z |
---
license: creativeml-openrail-m
---
|
Poonnnnnnnn/type-prediction-transformer
|
Poonnnnnnnn
| 2023-06-28T04:44:24Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-12T07:21:22Z |
---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: type-prediction-transformer
results: []
widget:
- text: "ถนนผุพังทำให้เกิดเสียงดังเวลารถวิ่ง"
- text: "ขี่มอไซค์บนทางเท้ามันจะเกินปุยมุ้ย"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# type-prediction-transformer
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0497
- F1: 0.8651
- Roc Auc: 0.9260
- Accuracy: 0.8208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 149 | 0.0812 | 0.8070 | 0.8677 | 0.7588 |
| No log | 2.0 | 298 | 0.0591 | 0.8585 | 0.9064 | 0.8141 |
| No log | 3.0 | 447 | 0.0493 | 0.8719 | 0.9144 | 0.8258 |
| 0.0886 | 4.0 | 596 | 0.0506 | 0.8614 | 0.9222 | 0.8090 |
| 0.0886 | 5.0 | 745 | 0.0487 | 0.8683 | 0.9255 | 0.8174 |
| 0.0886 | 6.0 | 894 | 0.0506 | 0.8693 | 0.9291 | 0.8191 |
| 0.0254 | 7.0 | 1043 | 0.0519 | 0.8619 | 0.9307 | 0.8090 |
| 0.0254 | 8.0 | 1192 | 0.0497 | 0.8651 | 0.9260 | 0.8208 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
YakovElm/IntelDAOS_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T04:39:11Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T04:38:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 1.0
- Validation Loss: 0.8058
- Validation Accuracy: 0.8859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1334 | 0.9790 | 0.5877 | 0.8859 | 0 |
| 0.0037 | 1.0 | 0.7378 | 0.8859 | 1 |
| 0.0015 | 1.0 | 0.8058 | 0.8859 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
limcheekin/fastchat-t5-3b-ct2
|
limcheekin
| 2023-06-28T04:36:38Z | 14 | 2 |
transformers
|
[
"transformers",
"ctranslate2",
"fastchat-t5-3b",
"quantization",
"int8",
"text2text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-31T06:51:36Z |
---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- fastchat-t5-3b
- quantization
- int8
pipeline_tag: text2text-generation
---
# Model Card for FastChat-T5 3B Q8
The model is a quantized version of [lmsys/fastchat-t5-3b-v1.0](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) with int8 quantization.
## Model Details
### Model Description
The model was quantized using [CTranslate2](https://opennmt.net/CTranslate2/) with the following command:
```
ct2-transformers-converter --model lmsys/fastchat-t5-3b --output_dir lmsys/fastchat-t5-3b-ct2 --copy_files generation_config.json added_tokens.json tokenizer_config.json special_tokens_map.json spiece.model --quantization int8 --force --low_cpu_mem_usage
```
If you want to perform the quantization yourself, you need to install the following dependencies:
```
pip install -qU ctranslate2 transformers[torch] sentencepiece accelerate
```
- **Shared by:** Lim Chee Kin
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import ctranslate2
import transformers
translator = ctranslate2.Translator("limcheekin/fastchat-t5-3b-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("limcheekin/fastchat-t5-3b-ct2")
input_text = "translate English to German: The house is wonderful."
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
The code is taken from https://opennmt.net/CTranslate2/guides/transformers.html#t5.
The key method in the code above is `translate_batch`; its supported parameters are listed [here](https://opennmt.net/CTranslate2/python/ctranslate2.Translator.html#ctranslate2.Translator.translate_batch).
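For example, decoding options can be passed directly to `translate_batch` (parameter names below are from the CTranslate2 documentation; the values are illustrative):
```python
# Sketch only: passing decoding options to translate_batch
results = translator.translate_batch(
    [input_tokens],
    max_decoding_length=256,
    sampling_topk=1,
)
```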
|
yihyeji/hanbok_q
|
yihyeji
| 2023-06-28T04:33:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-28T04:06:26Z |
---
license: creativeml-openrail-m
base_model: /workspace/data3/model_checkpoints/DIFFUSION_DB/Diffusion_models/diffusers/v15/chilloutmix_NiPrunedFp16Fix/
instance_prompt: a photo of 1 girl wearing hanbok_q
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yihyeji/hanbok_q
These are LoRA adaptation weights for /workspace/data3/model_checkpoints/DIFFUSION_DB/Diffusion_models/diffusers/v15/chilloutmix_NiPrunedFp16Fix/. The weights were trained on a photo of 1 girl wearing hanbok_q using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
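A minimal inference sketch with `diffusers` (the base checkpoint below is a public stand-in for the private path listed above, and the attention-processor loading call assumes the diffusers version used by this template):
```python
# Sketch only: apply these LoRA weights on top of a Stable Diffusion 1.5-class base model
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("yihyeji/hanbok_q")  # load the LoRA attention weights
image = pipe("a photo of 1 girl wearing hanbok_q").images[0]
image.save("hanbok_q.png")
```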
|
PritamReddy/test-demo
|
PritamReddy
| 2023-06-28T04:31:57Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T17:01:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-demo
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3257
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 7.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 4.4727 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 2.0 | 2 | 4.4727 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 3.0 | 3 | 4.4727 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 4.0 | 4 | 4.3257 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 5.0 | 5 | 4.3257 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/IntelDAOS_10_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T04:18:00Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T04:17:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0019
- Train Accuracy: 1.0
- Validation Loss: 0.8522
- Validation Accuracy: 0.8739
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1680 | 0.9670 | 0.5869 | 0.8739 | 0 |
| 0.0055 | 1.0 | 0.7770 | 0.8739 | 1 |
| 0.0019 | 1.0 | 0.8522 | 0.8739 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
w11wo/malaysian-distilbert-small
|
w11wo
| 2023-06-28T04:04:55Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"distilbert",
"fill-mask",
"malaysian-distilbert-small",
"ms",
"dataset:oscar",
"arxiv:1910.01108",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: ms
tags:
- malaysian-distilbert-small
license: mit
datasets:
- oscar
widget:
- text: "Hari ini adalah hari yang [MASK]!"
---
## Malaysian DistilBERT Small
Malaysian DistilBERT Small is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_original_ms` subset.
The model was originally HuggingFace's pretrained [English DistilBERT model](https://huggingface.co/distilbert-base-uncased) and is later fine-tuned on the Malaysian dataset. It achieved a perplexity of 10.33 on the validation dataset (20% of the dataset). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base DistilBERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|------------------------------|---------|------------------|----------------------------------------|
| `malaysian-distilbert-small` | 66M | DistilBERT Small | OSCAR `unshuffled_original_ms` Dataset |
## Evaluation Results
The model was trained for 1 epoch and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 2.476 | 2.336 | 10.33 | 0:40:05 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/malaysian-distilbert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Henry adalah seorang lelaki yang tinggal di [MASK].")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/malaysian-distilbert-small"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Bolehkah anda [MASK] Bahasa Melayu?"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Consider the biases in the OSCAR dataset, which may carry over into the results of this model.
## Author
Malaysian DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
loghai/q-FrozenLake-v1-4x4-noSlippery
|
loghai
| 2023-06-28T03:50:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:50:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper function defined in the course notebook (not a pip-installable import)
model = load_from_hub(repo_id="loghai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ALPHONSE28/SEMANA10_SINTENTICOS
|
ALPHONSE28
| 2023-06-28T03:45:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T03:15:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SEMANA10_SINTENTICOS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMANA10_SINTENTICOS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3103
- Accuracy: 0.9048
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chaowu/Reinforce-Pixelcopter-PLE-v0
|
chaowu
| 2023-06-28T03:36:42Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:00:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 67.30 +/- 46.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
eliztmq/q-FrozenLake-v1-4x4-noSlippery
|
eliztmq
| 2023-06-28T03:22:30Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:22:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper function defined in the course notebook (not a pip-installable import)
model = load_from_hub(repo_id="eliztmq/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ALPHONSE28/SEMANA10_2
|
ALPHONSE28
| 2023-06-28T03:13:42Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T03:55:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SEMANA10_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMANA10_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3581
- Accuracy: 0.88
- F1: 0.9189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jdawnduan/dqn-SpaceInvadersNoFrameskip-v4
|
jdawnduan
| 2023-06-28T02:53:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T02:52:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 591.50 +/- 212.53
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jdawnduan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jdawnduan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jdawnduan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
YakovElm/Hyperledger_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T02:36:55Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T02:36:19Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 1.0
- Validation Loss: 1.0086
- Validation Accuracy: 0.8807
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
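For reference, a minimal sketch of the equivalent Keras optimizer construction (only the learning rate and `clipnorm` differ from the Keras defaults):

```python
import tensorflow as tf

# Equivalent construction of the optimizer configuration above
# (learning_rate=3e-05 and clipnorm=1.0; the remaining values are the Keras defaults).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)
```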
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0356 | 0.9948 | 0.8275 | 0.8807 | 0 |
| 0.0008 | 1.0 | 0.9377 | 0.8807 | 1 |
| 0.0004 | 1.0 | 1.0086 | 0.8807 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Apache_15_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T02:30:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T02:29:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_15_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0167
- Train Accuracy: 0.9951
- Validation Loss: 0.7266
- Validation Accuracy: 0.8892
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2926 | 0.8609 | 0.5467 | 0.8651 | 0 |
| 0.0318 | 0.9910 | 0.7866 | 0.8220 | 1 |
| 0.0167 | 0.9951 | 0.7266 | 0.8892 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Geonmo/CLIP-Giga-config-fixed
|
Geonmo
| 2023-06-28T02:19:09Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2023-06-28T01:18:15Z |
---
license: mit
---
It will be removed when the config in `laion/CLIP-ViT-bigG-14-laion2B-39B-b160k` is fixed.
|
JuniorLeao/ppo-Huggy
|
JuniorLeao
| 2023-06-28T02:18:40Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T02:18:30Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JuniorLeao/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
graydient/diffusers-residentchiefnz-icbinp
|
graydient
| 2023-06-28T01:47:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-28T01:47:01Z |
This is a native Diffusers conversion of ICBINP - "I Can't Believe It's Not Photography" by residentchiefnz
See here for more information: https://civitai.com/models/28059?modelVersionId=89658
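A minimal Diffusers loading sketch, assuming this is a Stable Diffusion 1.5-style checkpoint (the prompt and fp16/CUDA settings are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "graydient/diffusers-residentchiefnz-icbinp",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
image = pipe("a photorealistic portrait, natural window light").images[0]
image.save("icbinp_sample.png")
```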
|
jwesleye/ppo-LunarLander-v2
|
jwesleye
| 2023-06-28T01:30:00Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T00:50:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.40 +/- 44.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
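A minimal loading and evaluation sketch under the usual RL Zoo / SB3 conventions (the checkpoint filename is an assumption; check the repository's file list if it differs):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# The checkpoint filename is an assumption; check the repository's file list if it differs.
checkpoint = load_from_hub(
    repo_id="jwesleye/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```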
|
NjinHF/swin-tiny-patch4-window7-224-finetuned-eurosat
|
NjinHF
| 2023-06-28T01:14:42Z | 223 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-08T06:03:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.977037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0731
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 190 | 0.1200 | 0.9626 |
| 0.2012 | 2.0 | 380 | 0.1026 | 0.9656 |
| 0.1437 | 3.0 | 570 | 0.0731 | 0.9770 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cerspense/zeroscope_v1-1_320s
|
cerspense
| 2023-06-28T00:30:06Z | 10 | 5 |
diffusers
|
[
"diffusers",
"Text-to-Video",
"license:cc-by-nc-4.0",
"diffusers:TextToVideoSDPipeline",
"region:us"
] | null | 2023-06-08T22:18:32Z |
---
tags:
- Text-to-Video
license: cc-by-nc-4.0
---
Modelscope without the watermark, trained at 320x320 from the [original weights](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis), with no skipped frames for less flicker.
This updated version fixes the stretching issues present in v1, but produces different results overall.
The model was trained on a subset of the vimeo90k dataset plus a selection of music videos.
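A minimal text-to-video sketch with Diffusers (the prompt, frame count, and fp16/CUDA settings are assumptions; 320x320 matches the training resolution noted above):

```python
import torch
from diffusers import TextToVideoSDPipeline
from diffusers.utils import export_to_video

pipe = TextToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v1-1_320s", torch_dtype=torch.float16
)
pipe.to("cuda")

# Prompt and frame count are illustrative assumptions.
result = pipe("a corgi running on the beach", num_frames=24, height=320, width=320)
video_path = export_to_video(result.frames)
print(video_path)
```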
|
memotirre90/Equipo16_gpt2-HotelSentiment_Sintetico
|
memotirre90
| 2023-06-28T00:17:16Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T00:09:46Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Equipo16_gpt2-HotelSentiment_Sintetico
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Equipo16_gpt2-HotelSentiment_Sintetico
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4640
- Accuracy: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
crumb/bespoke-gpt-124m
|
crumb
| 2023-06-28T00:09:08Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T02:17:28Z |
Work in progress! We aren't even past 1B tokens yet, but check out how well it's doing regardless in the inference widget.
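It is a GPT-2-style checkpoint, so it can be sampled with the standard text-generation pipeline (the prompt and generation length below are just illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="crumb/bespoke-gpt-124m")
print(generator("The quick brown fox", max_new_tokens=40)[0]["generated_text"])
```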
|
johnpaulbin/meme-titles
|
johnpaulbin
| 2023-06-28T00:05:35Z | 178 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
Trained on ~400 YouTube titles of meme compilations.
WARNING: may produce offensive content.
|
hugfacerhaha/ppo-Huggy
|
hugfacerhaha
| 2023-06-27T23:42:34Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-27T23:42:23Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: hugfacerhaha/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/Apache_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-27T23:35:26Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T23:22:35Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0002
- Train Accuracy: 1.0
- Validation Loss: 0.8828
- Validation Accuracy: 0.9055
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0216 | 0.9963 | 0.7360 | 0.9055 | 0 |
| 0.0004 | 1.0 | 0.8254 | 0.9055 | 1 |
| 0.0002 | 1.0 | 0.8828 | 0.9055 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chaowu/ppo-SnowballTarget
|
chaowu
| 2023-06-27T23:27:15Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-27T23:27:12Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: chaowu/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gbellamy/ppo-Pyramids
|
gbellamy
| 2023-06-27T23:14:40Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-27T23:10:39Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gbellamy/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vuiseng9/ov-gpt2-fp32-no-cache
|
vuiseng9
| 2023-06-27T22:58:37Z | 6,370 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"openvino",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T22:07:52Z |
# Notes:
This model is inherited directly from gpt2 on the HF model hub; the GPT-2 OpenVINO IR from OMZ was then copied here. The intended usage of this model is with optimum-intel.
```python
# Requires the optimum-intel package (e.g. `pip install optimum[openvino]`)
from transformers import AutoTokenizer, pipeline, set_seed, AutoModelForCausalLM
from optimum.intel.openvino import OVModelForCausalLM
model_id="vuiseng9/ov-gpt2-fp32-no-cache"
model = OVModelForCausalLM.from_pretrained(model_id, use_cache=False)
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator_pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
output = generator_pipe("It's a beautiful day ...", max_length=30, num_return_sequences=1)
```
|
FPHam/Karen_theEditor_13b_HF
|
FPHam
| 2023-06-27T22:54:55Z | 35 | 35 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"lora",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T21:09:13Z |
---
tags:
- lora
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://media.tenor.com/frGCmLDFbkMAAAAC/karen-ok.gif" alt="FPHam's Karen" style="width: 30%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
## Karen is an editor for your fiction. (v.0.2)
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in wit and wisdom while staying respectful of your style.
She is also an absolute joy to chat with, although she may correct the grammar in your chats too from time to time.
(As a certain well-known LLM said, "She is a radiant beacon of amusement".)
She also has a particular soft spot for Llamas.
## Quantized Karen version (Quantized by TheBloke)
* [4-bit GPTQ models for GPU inference](https://huggingface.co/FPHam/Karen_theEditor-13B-4bit-128g-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
Karen gets triggered by this prompt (pun intended):
```
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
```
Feed it a paragraph at a time - that's where she works best - a few sentences.
Here is my instruct yaml for oobabooga webui (in Instruct mode):
```
user: "USER: Edit the following for spelling and grammar mistakes: "
bot: "ASSISTANT:"
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n"
context: "Assistant is a meticulous senior editor with a specialization in editing fictional stories. When given text, Assistant detects and corrects grammatical errors, including subject-verb agreement, tense consistency, punctuation, capitalization, use of correct articles and correct present perfect and past perfect tense.\n"
```
I am not sure to what extent the context part has any effect. It doesn't hurt, though.
This is version 0.2, finetuned just enough that it retains the underlying Vicuna qualities.
## Example:
Input
```
USER: Edit the following for spelling and grammar mistakes:
unce upon time lived name John who have pet monkey Jackie.
one day while walkin down the street he saw his friend Tom coming to him.
"Hey Tom" said John "hows it going?"
Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair.
Tom look at John and ask "Is that yours?"
John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off.
Tom struggle to keep his balance as he laugh hystericaly.
```
Output
```
ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie.
One day while walking down the street he saw his friend Tom approaching him.
"Hey Tom," said John. "How's it going?"
Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair.
Tom looked at John and asked, "Is that yours?"
John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off.
Tom struggled to keep his balance as he laughed hysterically.
```
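For quick experimentation outside the webui, a minimal `transformers` sketch using the prompt format above (precision, device placement, and generation settings are assumptions, not recommended values):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Karen_theEditor_13b_HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

text = "unce upon time lived name John who have pet monkey Jackie."
prompt = f"USER: Edit the following for spelling and grammar mistakes: {text}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```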
## Goal: to create the best grammar checker you have ever seen
## To do:
- train on larger dataset, big, enormous, gargantuan
- see if finetuning on just plain LLAMA without Vicuna would work better or worse (the theory is that it will be very focused on editing and nothing else)
- explore what different settings (temperature, top_p, top_k) do for this type of finetune
- create Rachel, the paraphrasing editor
|
chaowu/Reinforce-Cartpole-v1
|
chaowu
| 2023-06-27T22:43:50Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T21:36:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 1000.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Angel-Silva/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-sobremuestreo
|
Angel-Silva
| 2023-06-27T22:43:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T21:45:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-sobremuestreo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-sobremuestreo
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0945
- F1: 0.5371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0619 | 1.0 | 735 | 1.0749 | 0.5091 |
| 0.7768 | 2.0 | 1470 | 1.0945 | 0.5371 |
| 0.6105 | 3.0 | 2205 | 1.2320 | 0.5270 |
| 0.4603 | 4.0 | 2940 | 1.3570 | 0.5285 |
| 0.398 | 5.0 | 3675 | 1.4115 | 0.5244 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gbellamy/ppo-SnowballTarget
|
gbellamy
| 2023-06-27T22:27:20Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-27T22:27:18Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gbellamy/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vuiseng9/ov-gpt2-fp32-kv-cache
|
vuiseng9
| 2023-06-27T22:01:46Z | 5,061 | 0 |
transformers
|
[
"transformers",
"pytorch",
"openvino",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T21:58:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- wikitext
metrics:
- accuracy
model-index:
- name: gpt2-fp32-ov-kv-cache
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: wikitext wikitext-2-raw-v1
type: wikitext
config: wikitext-2-raw-v1
split: validation
args: wikitext-2-raw-v1
metrics:
- name: Accuracy
type: accuracy
value: 0.38449006190941676
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-fp32-ov-kv-cache
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the wikitext wikitext-2-raw-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3894
- Accuracy: 0.3845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Apache_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-27T21:52:29Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T21:48:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0002
- Train Accuracy: 1.0
- Validation Loss: 0.9899
- Validation Accuracy: 0.8924
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0235 | 0.9991 | 0.8302 | 0.8924 | 0 |
| 0.0004 | 1.0 | 0.9279 | 0.8924 | 1 |
| 0.0002 | 1.0 | 0.9899 | 0.8924 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
eluzhnica/mpt-30b-instruct-peft-compatible
|
eluzhnica
| 2023-06-27T21:39:19Z | 13 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-27T18:24:56Z |
---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-30B-Instruct
This is MPT-30B-Instruct with added support for finetuning with peft (tested with QLoRA). It is not finetuned further; the weights are the same as the original MPT-30B-Instruct.
I have not traced through the whole Hugging Face stack to verify that this is working correctly, but it does finetune with QLoRA and the outputs are reasonable.
Inspired by implementations here https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main
https://huggingface.co/mosaicml/mpt-7b/discussions/42.
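A minimal QLoRA-style setup sketch (the LoRA hyperparameters and the MPT attention module names targeted below are assumptions, not values verified against this checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "eluzhnica/mpt-30b-instruct-peft-compatible"

# 4-bit (QLoRA-style) loading; quantization settings are illustrative assumptions.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    trust_remote_code=True,  # custom MPT architecture
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv", "out_proj"],  # assumed MPT attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```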
The original description from the MosaicML team follows below:
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.01% |
| cot_gsm8k | 3.36 M | 6.32% |
| dialogsum | 0.1 M | 0.19% |
| dolly_hhrlhf | 5.89 M | 11.07% |
| duorc | 8.2 M | 15.51% |
| qasper | 10.97 M | 20.63% |
| quality | 11.31 M | 21.28% |
| scrolls/summ_screen_fd | 11.56 M | 21.82% |
| spider | 0.089 M | 0.16% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
Unpublished/WizardCoder-15B-V1.0-fp16-ct2
|
Unpublished
| 2023-06-27T21:33:42Z | 2 | 0 |
transformers
|
[
"transformers",
"license:bigscience-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2023-06-27T14:13:50Z |
---
license: bigscience-openrail-m
---
|
TheSupremeTaco/ppo-LunarLander-v2
|
TheSupremeTaco
| 2023-06-27T21:19:57Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T21:19:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.87 +/- 17.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
TalesLF/ppo-LunarLander-v2
|
TalesLF
| 2023-06-27T21:18:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T21:18:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.11 +/- 12.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
derek-thomas/distilhubert-finetuned-gtzan-efficient
|
derek-thomas
| 2023-06-27T21:17:59Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-27T20:58:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-efficient
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-efficient
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6663
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0684 | 1.0 | 57 | 2.0340 | 0.45 |
| 1.6234 | 2.0 | 114 | 1.5087 | 0.57 |
| 1.1514 | 3.0 | 171 | 1.1417 | 0.71 |
| 1.0613 | 4.0 | 228 | 1.0161 | 0.74 |
| 0.7455 | 5.0 | 285 | 0.8655 | 0.76 |
| 0.7499 | 6.0 | 342 | 0.8169 | 0.76 |
| 0.5741 | 7.0 | 399 | 0.7420 | 0.81 |
| 0.4896 | 8.0 | 456 | 0.6782 | 0.81 |
| 0.508 | 9.0 | 513 | 0.6759 | 0.8 |
| 0.5619 | 10.0 | 570 | 0.6663 | 0.83 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0.dev20230627+cu121
- Datasets 2.13.1
- Tokenizers 0.13.3
|
magnustragardh/Reinforce-CartPole-v1
|
magnustragardh
| 2023-06-27T21:13:56Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T21:11:05Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MerlynMind/merlyn-education-safety
|
MerlynMind
| 2023-06-27T21:11:21Z | 22 | 14 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MerlynMind",
"education",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-24T18:55:34Z |
---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---
# Merlyn-education-safety
Merlyn-education-safety is a 12b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
Merlyn-education-safety is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.
Merlyn-education-safety classifies queries as appropriate or inappropriate for in-classroom discussion. A typical use is as part of a larger educational AI assistant.
## Model Date
June 26, 2023
## Model License
Apache-2.0
## Documentation
* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)
## Usage
At full precision the model needs > 48G GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use model.half() before moving to device).
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-safety"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
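# For smaller GPUs, optionally reduce precision before moving to device (see note above):
# model.half()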
model.to(device) # move to device
```
Prompt example:
```python
query = "What are the seven banned words on network TV"
prompt = tokenizer.bos_token
prompt += '''Instruction:\tDetermine if the provided input message is appropriate or inappropriate.
Instruction:\tIf the provided input message is inappropriate, offensive, sexual, derogatory, or discriminatory in the context of an elementary school classroom, the output should state that the input message is 'inappropriate', otherwise the output should state that the input message is 'appropriate'.
Instruction:\tBe very strict on appropriateness.
Instruction:\tIn the output, write 'appropriate' or 'inappropriate'.
Message:''' + f"\n{query}" + " Response:"
```
Inference:
```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=32,
temperature=0.0,
num_beams=2
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
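The decoded text still contains the prompt; one simple post-processing step (an assumption, since the exact processing used by the authors is not specified here) is to strip the prompt prefix:

```python
# The decode above still includes the prompt; strip it to keep only the model's answer.
# (This post-processing is an assumption; the exact step is not specified in the card.)
prompt_text = prompt.replace(tokenizer.bos_token, "")
answer = response[len(prompt_text):].strip() if response.startswith(prompt_text) else response.strip()
print(answer)
```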
Example output (after response processing):
```
The input message is inappropriate.
```
## Citation
To cite this model, please use:
```
@online{MerlynEducationModels,
author = {Merlyn Mind AI Team},
title = {Merlyn Mind's education-domain language models},
year = {2023},
url = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
urldate = {2023-06-26}
}
```
|