| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]: 2020-02-15 11:33:14 – 2025-07-26 18:27:43) | downloads (int64: 0 – 223M) | likes (int64: 0 – 11.7k) | library_name (string, 533 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC]: 2022-03-02 23:29:04 – 2025-07-26 18:26:40) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
fx1H/Reinforce_Agent_Playing-CartPole-v1
|
fx1H
| 2023-06-22T18:32:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:32:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_Playing-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 193.10 +/- 21.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
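A minimal loading sketch (assuming the repository stores the policy as `model.pt`, the convention used in the Unit 4 notebooks; check the repo's file list):
```python
import torch
from huggingface_hub import hf_hub_download

# Hypothetical filename following the Unit 4 convention; loading a full module
# this way also requires the Policy class from the course notebook in scope.
checkpoint = hf_hub_download(
    repo_id="fx1H/Reinforce_Agent_Playing-CartPole-v1", filename="model.pt"
)
policy = torch.load(checkpoint)
policy.eval()
```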
|
aravind168/LunarLander-v2
|
aravind168
| 2023-06-22T18:32:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:31:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.77 +/- 21.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; SB3 repos conventionally store ppo-LunarLander-v2.zip.
checkpoint = load_from_hub("aravind168/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sxandie/NER2.0.1-dataset
|
sxandie
| 2023-06-22T18:32:07Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-22T17:19:10Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/NER2.0.1-dataset
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/NER2.0.1-dataset
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0892
- Validation Loss: 0.1364
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
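For reference, a schedule like the one above is typically built with `transformers.create_optimizer`; a minimal sketch reproducing it (`num_warmup_steps=0` is an assumption, as the card does not state it):
```python
from transformers import create_optimizer

# AdamWeightDecay with polynomial decay from 2e-5 to 0 over 35,640 steps,
# matching the config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=35640,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```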
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2725 | 0.1879 | 0 |
| 0.1604 | 0.1563 | 1 |
| 0.1234 | 0.1496 | 2 |
| 0.1015 | 0.1404 | 3 |
| 0.0892 | 0.1364 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
|
Niftynr/FAlcon_7b_working_FAQ
|
Niftynr
| 2023-06-22T18:27:15Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T18:27:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
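For reference, an equivalent config can be constructed with `transformers.BitsAndBytesConfig`; a minimal sketch:
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```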
### Framework versions
- PEFT 0.4.0.dev0
|
yanaayanaayanaa/khanzalora
|
yanaayanaayanaa
| 2023-06-22T18:24:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T18:17:32Z |
---
license: creativeml-openrail-m
---
|
Hinova/poca-SoccerTwos-14M
|
Hinova
| 2023-06-22T18:16:55Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:16:13Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Hinova/poca-SoccerTwos-14M
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yanaayanaayanaa/febrianilora
|
yanaayanaayanaa
| 2023-06-22T18:14:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T17:45:51Z |
---
license: creativeml-openrail-m
---
|
houssamb48/SynthoMindAI
|
houssamb48
| 2023-06-22T18:05:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T18:05:27Z |
---
license: creativeml-openrail-m
---
|
rogelioplatt/roberta-base-bne-finetuned-Tass2020
|
rogelioplatt
| 2023-06-22T18:03:15Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T18:01:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-bne-finetuned-Tass2020
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-Tass2020
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9512 | 1.0 | 15 | 3.4947 |
| 3.37 | 2.0 | 30 | 2.9933 |
| 3.1298 | 3.0 | 45 | 3.1546 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hannahh7/lunar_lander_v2_scratch
|
hannahh7
| 2023-06-22T17:59:43Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T17:59:37Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -171.36 +/- 86.00
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'hannahh7/lunar_lander_v2_scratch',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Blackroot/airoboros-7B-gpt4-1.4-half-wanda
|
Blackroot
| 2023-06-22T17:56:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-22T17:24:35Z |
2:4 pruned with Wanda; WikiText perplexity evaluates to about 11.4, versus about 6.2 for the base model.
|
zslrmhb/SpaceInvadersNoFrameskip-v4
|
zslrmhb
| 2023-06-22T17:48:31Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T16:30:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 703.00 +/- 168.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zslrmhb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zslrmhb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zslrmhb
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
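Alternatively, a Python loading sketch with `huggingface_sb3` (the checkpoint filename follows the RL Zoo naming convention and is an assumption; verify it in the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Hypothetical filename per the RL Zoo convention; running the agent also
# requires the AtariWrapper/frame-stack setup listed in the hyperparameters.
checkpoint = load_from_hub(
    "zslrmhb/SpaceInvadersNoFrameskip-v4", "dqn-SpaceInvadersNoFrameskip-v4.zip"
)
model = DQN.load(checkpoint)
```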
|
Lilpopit/privet
|
Lilpopit
| 2023-06-22T17:42:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-22T17:35:31Z |
```python
import io

import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/Falon/ayaka-db"
# API_TOKEN is your Hugging Face API token; it must be defined before use.
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

# You can access the image with PIL.Image, for example
image = Image.open(io.BytesIO(image_bytes))
```
|
mariololo/ppo-Huggy
|
mariololo
| 2023-06-22T17:39:32Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T17:39:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mariololo/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Yireonzz/mshadows
|
Yireonzz
| 2023-06-22T17:12:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T17:07:19Z |
---
license: creativeml-openrail-m
---
|
mmendoza/gpt-j-6B-lora-polite-enh
|
mmendoza
| 2023-06-22T16:57:34Z | 4 | 1 |
peft
|
[
"peft",
"politeness",
"text2text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-06-20T17:11:46Z |
---
license: apache-2.0
language:
- en
library_name: peft
tags:
- politeness
pipeline_tag: text2text-generation
---
# Politeness Generative Model
## Overview
This GPT-based model is a text2text generator that writes a polite version of an input sentence. It is based on gpt-j-6B and was aligned using 29,000 pairs of sentences.
## Prompt
You have an input text. Write a polite version of the text preserving the meaning of the input.
Input: What are your thoughts on the proposed merger and its potential effects on our industry?
Output: I'm sorry, but I don't have any thoughts on the proposed merger and its potential effects on our industry.
## Quick tutorial
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "mmendoza/gpt-j-6B-lora-polite-enh"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```
# Load the Politeness Model
```python
model = PeftModel.from_pretrained(model, peft_model_id)
```
# Prompting
```python
batch = tokenizer(
    "You have an input text. Write a polite version of the text preserving the meaning of the input. "
    "Input: No card counting allowed in blackjack at the casino. Output: ",
    return_tensors='pt',
)
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
line = tokenizer.decode(output_tokens[0], skip_special_tokens=True)
start = 'Output: '
end = '.'
line = line.replace("\n"," ")
line = (line.split(start))[1].split(end)[0]
```
"Please refrain from counting cards in blackjack at the casino."
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
pavanBuduguppa/asr_inverse_text_normalization
|
pavanBuduguppa
| 2023-06-22T16:51:33Z | 132 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"code",
"asr",
"inverse text normalization",
"en",
"dataset:pavanBuduguppa/asr_inverse_text_normalization",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-02-22T13:05:54Z |
---
license: gpl-3.0
language:
- en
pipeline_tag: text2text-generation
tags:
- code
- asr
- inverse text normalization
datasets:
- pavanBuduguppa/asr_inverse_text_normalization
---
# asr_inverse_text_normalization
Fine-tuned a facebook/bart-base pretrained model on the ASR inverse text normalization dataset by treating it as a seq2seq task. Other approaches that may be considered include treating it as a token-classification task, or the one described at https://machinelearning.apple.com/research/inverse-text-normal.
## Model description
BART (Bidirectional and Auto-Regressive Transformers) is a pre-trained transformer-based neural network model developed by Facebook AI Research (FAIR) for various natural language processing (NLP) tasks.
The BART architecture is based on the Transformer model, which processes sequential input data, such as text, by applying self-attention mechanisms to capture the relationships between different words in the input sequence.
BART includes both auto-regressive and bidirectional encoder-decoder transformer architectures, which enable it to perform both generation and prediction tasks.
BART was trained on a diverse range of NLP tasks, including machine translation, summarization, and question answering, and has shown strong performance across multiple benchmarks.
Its training process involves corrupting text with different types of noise and training the model to reconstruct the original text, which has been shown to improve the model's ability to generalize to new tasks and outperform other pre-trained language models like GPT and BERT.
The model flavour chosen is "facebook/bart-base"; the "after" column is used as the source while the "before" column is used as the target.
## Intended uses & limitations
This model can be used as an out-of-the-box solution for inverse text normalization, converting un-normalized ASR output such as
"my c v v for my card is five six seven and it expires on november twenty three" -> "my CVV for my card is 567 and it expires on November 23"
The model should be explored with various min and max length settings at generation time for your specific use case.
### How to use
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="pavanBuduguppa/asr_inverse_text_normalization")
>>> generator("my c v v for my card is five six seven and it expires on november twenty three")
```
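As noted above, min and max generation lengths are worth exploring; pipeline calls forward generation kwargs, for example (`max_length=64` is an arbitrary illustration):
```python
>>> generator(
...     "my c v v for my card is five six seven and it expires on november twenty three",
...     max_length=64,
... )
```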
## Training data
All credits and rights for the training data belong to Google. The data was merely obtained and processed for this model; the original data can be found at https://www.kaggle.com/competitions/text-normalization-challenge-english-language/data
|
HxLab/ppo-LunarLander-v2
|
HxLab
| 2023-06-22T16:44:41Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T16:44:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.13 +/- 14.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; SB3 repos conventionally store ppo-LunarLander-v2.zip.
checkpoint = load_from_hub("HxLab/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
VMware/electra-small-mrqa
|
VMware
| 2023-06-22T16:36:12Z | 251 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"en",
"dataset:mrqa",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-17T21:28:48Z |
---
license: apache-2.0
datasets:
- mrqa
language:
- en
metrics:
- exact_match
- f1
model-index:
- name: VMware/electra-small-mrqa
results:
- task:
type: Question-Answering
dataset:
type: mrqa
name: MRQA
metrics:
- type: exact_match
value: 57.63
name: Eval EM
- type: f1
value: 69.38
name: Eval F1
- type: exact_match
value: 38.68
name: Test EM
- type: f1
value: 51.56
name: Test F1
---
This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab.
# Model Details
- **Model name:** ELECTRA-Small-MRQA
- **Model type:** Extractive Question Answering
- **Parent Model:** [ELECTRA-Small-Discriminator](https://huggingface.co/google/electra-small-discriminator)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Training time:** 2:16:36 on 1 Nvidia V100 32GB GPU
- **Language:** English
- **Framework:** PyTorch
- **Model version:** 1.0
# Intended Use
This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding.
# How to Use
```python
from transformers import pipeline
question_answerer = pipeline("question-answering", model='VMware/electra-small-mrqa')
context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT."
question = "What is MRQA?"
result = question_answerer(question=question, context=context)
print(result)
# {
# 'score': 0.3399854898452759,
# 'start': 30,
# 'end': 68,
# 'answer': 'Machine Reading for Question Answering'
# }
```
# Training Details
The model was trained for 1 epoch on the MRQA training set.
## Training Hyperparameters
```python
args = TrainingArguments(
"electra-small-mrqa",
save_strategy="epoch",
learning_rate=1e-5,
num_train_epochs=1,
weight_decay=0.01,
per_device_train_batch_size=16,
)
```
# Evaluation Metrics
The model was evaluated using standard metrics for question-answering models, including:
- Exact match (EM): the percentage of questions for which the model produces an exact match with the ground truth answer.
- F1 score: a weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
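As an illustration, a simplified sketch of these two metrics (SQuAD-style token overlap; this is not the exact evaluation script used for the numbers below):
```python
from collections import Counter

def exact_match(prediction: str, truth: str) -> float:
    # 1.0 if the prediction matches the ground truth exactly, else 0.0.
    return float(prediction.strip() == truth.strip())

def f1_score(prediction: str, truth: str) -> float:
    # Token-level overlap F1 between prediction and ground truth.
    pred_tokens, truth_tokens = prediction.split(), truth.split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)
```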
# Model Family Performance
| Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 |
| BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 |
| BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 |
| DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 |
| DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 |
| DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** |
| ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 |
| ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 |
| ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 |
| MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 |
| MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 |
| MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 |
| MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 |
| MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 |
| TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 |
| RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 |
| RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 |
\* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA.
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
|
VMware/bert-large-mrqa
|
VMware
| 2023-06-22T16:36:05Z | 173 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:mrqa",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-17T20:46:18Z |
---
license: apache-2.0
datasets:
- mrqa
language:
- en
metrics:
- exact_match
- f1
model-index:
- name: VMware/bert-large-mrqa
results:
- task:
type: Question-Answering
dataset:
type: mrqa
name: MRQA
metrics:
- type: exact_match
value: 69.52
name: Eval EM
- type: f1
value: 80.50
name: Eval F1
- type: exact_match
value: 55.00
name: Test EM
- type: f1
value: 65.78
name: Test F1
---
This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab.
# Model Details
- **Model name:** BERT-Large-MRQA
- **Model type:** Extractive Question Answering
- **Parent Model:** [BERT-Large-uncased](https://huggingface.co/bert-large-uncased)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Training time:** 28:35:38 on 1 Nvidia V100 32GB GPU
- **Language:** English
- **Framework:** PyTorch
- **Model version:** 1.0
# Intended Use
This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding.
# How to Use
```python
from transformers import pipeline
question_answerer = pipeline("question-answering", model='VMware/bert-large-mrqa')
context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT."
question = "What is MRQA?"
result = question_answerer(question=question, context=context)
print(result)
# {
# 'score': 0.864973783493042,
# 'start': 30,
# 'end': 68,
# 'answer': 'Machine Reading for Question Answering'
# }
```
# Training Details
The model was trained for 1 epoch on the MRQA training set.
## Training Hyperparameters
```python
args = TrainingArguments(
"bert-large-mrqa",
save_strategy="epoch",
learning_rate=1e-5,
num_train_epochs=1,
weight_decay=0.01,
per_device_train_batch_size=8,
)
```
# Evaluation Metrics
The model was evaluated using standard metrics for question-answering models, including:
- Exact match (EM): the percentage of questions for which the model produces an exact match with the ground truth answer.
- F1 score: a weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
# Model Family Performance
| Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 |
| BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 |
| BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 |
| DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 |
| DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 |
| DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** |
| ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 |
| ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 |
| ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 |
| MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 |
| MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 |
| MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 |
| MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 |
| MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 |
| TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 |
| RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 |
| RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 |
\* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA.
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
|
aminramezani345/finetuning-sentiment-model-3000-samples
|
aminramezani345
| 2023-06-22T16:11:54Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-05T15:28:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3040
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Niftynr/falcon-7b-e_100
|
Niftynr
| 2023-06-22T16:10:52Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T16:10:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
brunoleme/my_awesome_eli5_clm-model
|
brunoleme
| 2023-06-22T16:02:27Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T15:00:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8709 | 1.0 | 1113 | 3.7946 |
| 3.7741 | 2.0 | 2226 | 3.7780 |
| 3.7275 | 3.0 | 3339 | 3.7753 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Akazi/resnet_c_s_redwood_finetuned
|
Akazi
| 2023-06-22T15:42:14Z | 13 | 0 |
transformers
|
[
"transformers",
"image-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-21T19:36:23Z |
---
license: mit
pipeline_tag: image-classification
---
# Finetuned ResNet Model for Park Image Classification
## Overview
This model is a finetuned ResNet model that has been trained on images from a local park in Pleasanton.
## Dataset
The training dataset consists of two classes: "Sierra Redwood" and "Coastal Redwood". The dataset contains images captured within the park in Pleasanton.
## Model Architecture
The model architecture used for this classification task is ResNet-50.
## Usage
To use this model, you can follow these steps:
1. Install the required dependencies, including PyTorch and torchvision.
2. Download the model file "resnet_park_redwood_finetuned.tar.gz" from the provided link.
3. Load the model using the `torch.load` function and extract the model weights.
4. Prepare your input image by resizing it to 224x224 pixels and applying the necessary transformations (e.g., normalization); see the sketch after this list.
5. Pass the preprocessed image through the model to obtain the predicted class probabilities.
6. Optionally, apply softmax to the predicted probabilities to obtain normalized scores.
7. The model will output the predicted label (Sierra Redwood or Coastal Redwood) along with the corresponding probability scores.
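A minimal preprocessing sketch for steps 4-5 (the ImageNet mean/std values are an assumption, since the card does not state the exact statistics used):
```python
from PIL import Image
from torchvision import transforms

# Resize to 224x224 and normalize, per the steps above.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image_tensor = preprocess(Image.open("path/to/your/image.jpg")).unsqueeze(0)
```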
## Example Code
Here's an example code snippet for using the finetuned ResNet model:
```python
from transformers import pipeline

# Assumes the repo hosts a transformers-compatible image-classification
# checkpoint, per the model's tags.
model_name = "Akazi/resnet_c_s_redwood_finetuned"
image_path = "path/to/your/image.jpg"

# Create an image classification pipeline (downloads the model from the Hub)
classifier = pipeline("image-classification", model=model_name)

# Perform image classification
result = classifier(image_path)

# Print the predicted label
predicted_label = result[0]["label"]
print(predicted_label)
```
|
thenewcompany/q-FrozenLake-v1-4x4-noSlippery
|
thenewcompany
| 2023-06-22T15:29:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-10T03:41:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="thenewcompany/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Mtc2/q-FrozenLake-v1-4x4-noSlippery
|
Mtc2
| 2023-06-22T15:29:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:29:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Mtc2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
swl-models/MsceneMix-v1.1
|
swl-models
| 2023-06-22T15:28:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T15:19:40Z |
---
license: creativeml-openrail-m
---
|
S3S3/ppo-Huggy
|
S3S3
| 2023-06-22T15:27:56Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:27:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: S3S3/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ufal/byt5-small-multilexnorm2021-hr
|
ufal
| 2023-06-22T15:19:05Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"lexical normalization",
"hr",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: hr
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Croatian version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
maximxls/text-normalization-ru-terrible
|
maximxls
| 2023-06-22T15:06:15Z | 119 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"text-generation-inference",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T10:41:16Z |
---
license: mit
language:
- ru
library_name: transformers
tags:
- text-generation-inference
---
# Model Card for maximxls/text-normalization-ru-terrible
Normalization for Russian text. I couldn't find any existing solutions (besides algorithms, which I don't like), so I made this.
## Model Details
### Model Description
Tiny T5 trained from scratch for normalizing Russian texts:
- translating numbers into words
- expanding abbreviations into phonetic letter combinations
- transliterating English into Russian letters
- whatever else was in the dataset (see below)
### Model Sources
- **Training code repository:** https://github.com/maximxlss/text_normalization
- **Main dataset:** https://www.kaggle.com/c/text-normalization-challenge-russian-language
## Uses
Useful in TTS, for example with Silero, to make it read numbers and English words (even if not perfectly, at least they are not ignored).
### Quick Start
```Python
from transformers import (
T5ForConditionalGeneration,
PreTrainedTokenizerFast,
)
model_path = "maximxls/text-normalization-ru-terrible"
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path)
example_text = "Я ходил в McDonald's 10 июля 2022 года."
inp_ids = tokenizer(
example_text,
return_tensors="pt",
).input_ids
out_ids = model.generate(inp_ids, max_new_tokens=128)[0]
out = tokenizer.decode(out_ids, skip_special_tokens=True)
print(out)
```
`я ходил в макдоналд'эс десятого июля две тысячи двадцать второго года.`
## Bias, Risks, and Limitations
**Very much unreliable:**
- For some reason, it sometimes skips over the first couple of tokens. It might be beneficial to add some extra padding or similar to make it more stable; I wasn't able to solve this during training.
- It is sometimes quite unstable, repeating or missing words (especially with transliteration).
## Training Details
### Training Data
Data from [this Kaggle challenge](https://www.kaggle.com/c/text-normalization-challenge-russian-language) (761,435 sentences) as well as a bit of extra data written by me.
### Training Procedure
#### Preprocessing
See [`preprocessing.py`](https://github.com/maximxlss/text_normalization/blob/master/preprocess.py)
#### Training Hyperparameters
See [`train.py`](https://github.com/maximxlss/text_normalization/blob/master/train.py)
I reset the learning rate manually several times during training; see the metrics.
#### Details
See [`README` on github](https://github.com/maximxlss/text_normalization) for a step-by-step overview of the training procedure.
## Technical Specifications
#### Hardware
A couple tens of hours of RTX 3090 Ti compute on my personal PC (21.65 epochs).
|
Hansr/Lycoris
|
Hansr
| 2023-06-22T15:02:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:07:38Z |
---
license: creativeml-openrail-m
---
|
rudzhRjwu/my_awesome_eli5_clm-model
|
rudzhRjwu
| 2023-06-22T15:01:11Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:45:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8651 | 1.0 | 1116 | 3.7546 |
| 3.7676 | 2.0 | 2232 | 3.7417 |
| 3.7342 | 3.0 | 3348 | 3.7374 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jondurbin/airoboros-33b-gpt4
|
jondurbin
| 2023-06-22T14:59:38Z | 1,440 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-10T10:29:04Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.1
---
## Overview
This is a QLoRA fine-tuned 33B-parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
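As an illustration, assembling the prompt exactly as described (a sketch; the template strings are taken from the example above):
```python
system = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)
user_prompt = "What color are blueberries?"

# Single spaces around the role tags, exactly as described above.
full_prompt = f"{system} USER: {user_prompt} ASSISTANT:"
```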
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one, or a list) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
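To avoid hand-writing the delimiters, a small builder along these lines produces the same structure (a sketch; the helper names are assumptions, not part of the model or dataset):
```python
def make_block(text: str, **metadata) -> str:
    # One input block: metadata key/value pairs between BEGINCONTEXT/ENDCONTEXT,
    # followed by the text itself.
    meta = "\n".join(f"{key}: {value}" for key, value in metadata.items())
    return f"BEGININPUT\nBEGINCONTEXT\n{meta}\nENDCONTEXT\n{text}\nENDINPUT"

def closed_context_prompt(blocks, instruction: str) -> str:
    # Any number of input blocks, then a single instruction block.
    return "\n".join(blocks) + f"\nBEGININSTRUCTION\n{instruction}\nENDINSTRUCTION"

prompt = closed_context_prompt(
    [make_block(
        "In a shocking turn of events, blueberries are now green.",
        date="2021-01-01",
        url="https://web.site/123",
    )],
    "What color are blueberries? Source?\nDon't make up answers if you don't know.",
)
print(prompt)
```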
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
Bluberries are green.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question-and-answer pairs in the context. If you ask a question of a document containing question-answer pairs, the model may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
### Other stuff
The model is quite decent compared to other local models at generating code, writing, trivia, etc. Give it a shot at anything, and let me know where it falls apart.
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-13b-gpt4-1.1
|
jondurbin
| 2023-06-22T14:59:32Z | 1,433 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-09T22:06:28Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.1
---
## Delta 1.1
This is a minor update of https://huggingface.co/jondurbin/airoboros-13b-gpt4 with ~1k more coding instructions, and fixes/improvements to context instructions from https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1
The remainder of the model card is duplicated from the original.
## Overview
This is a fine-tuned 13b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
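If you'd rather not use FastChat, a rough sketch with plain `transformers` should also work (untested here; the vicuna template is written out inline, and the fp16/device settings are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "jondurbin/airoboros-13b-gpt4-1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input. "
    "USER: Give me a list of 7 words that start with EN ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=2048, temperature=0.5, do_sample=True
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```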
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the provided context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one, or a list) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question-and-answer pairs in the context. If you ask a question of a document containing question-answer pairs, the model may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install asyncio fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from uvicorn.request import Request
app = FastAPI()
class FileModel(BaseModel):
id: str
file: str
@app.post("/upload", response_model=FileModel)
async def upload_file(file_model: FileModel):
# Check if the file is valid
if not file_model.file:
raise HTTPException(status_code=400, detail="File must be provided")
# Calculate the sha256 checksum of the file
sha256_checksum = await calculate_sha256(file_model.file)
# Insert the file into the database
await insert_into_db(file_model.id, sha256_checksum)
return {"sha256_checksum": sha256_checksum}
async def calculate_sha256(file_path: str) -> str:
# Implement the sha256 calculation logic here
pass
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
# Implement the database insertion logic here
pass
async def get_current_date():
return {"date": datetime.now().isoformat()}
async def main():
# Start the FastAPI webserver
app.run(debug=True)
if __name__ == "__main__":
await main()
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
import hashlib
file_path = Path(file_path).read_text()
hash_obj = hashlib.sha256()
hash_obj.update(file_path)
sha256_checksum = hash_obj.hexdigest()
return sha256_checksum
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
import psycopg2
conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
cursor = conn.cursor()
query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
cursor.execute(query, (file_id, sha256_checksum))
conn.commit()
cursor.close()
conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
void insert(char* key, char* value) {
TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
if (!parent) {
perror("malloc failed");
exit(1);
}
strcpy(parent->key, key);
strcpy(parent->value, value);
parent->left = parent->right = NULL;
if (!tree) {
tree = parent;
} else {
insert(parent->key, parent->value);
}
}
char* search(char* key) {
TreeNode* current = tree;
while (current != NULL) {
if (strcmp(key, current->key) == 0) {
return current->value;
}
current = current->right;
}
return NULL;
}
void print_tree() {
TreeNode* current = tree;
while (current != NULL) {
printf("%s = %s\n", current->key, current->value);
current = current->right;
}
}
void* accept_client(void* args) {
int client_socket = *(int*)args;
char buffer[1024];
while (1) {
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
char* input = buffer;
char* input_token;
while ((input_token = strtok(input, " \n")) != NULL) {
switch (input_token[0]) {
case 'G':
if (!search(input_token)) {
perror("Value not found");
exit(1);
}
printf("%s = %s\n", input_token, search(input_token));
break;
case 'S':
if (!strcmp(input_token, "SET")) {
char* key = strtok(NULL, " \n");
char* value = strtok(NULL, " \n");
if (!key || !value) {
perror("Invalid input");
exit(1);
}
insert(key, value);
printf("%s set %s to %s\n", input_token, key, value);
} else {
perror("Invalid command");
exit(1);
}
break;
default:
perror("Invalid input");
exit(1);
}
input = strtok(NULL, " \n");
}
}
return NULL;
}
int main() {
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("socket failed");
exit(1);
}
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(8080);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind failed");
exit(1);
}
if (listen(server_socket, 5) < 0) {
perror("listen failed");
exit(1);
}
pthread_t accept_thread;
pthread_create(&accept_thread, NULL, accept_client, &server_socket);
char* client_input;
int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
if (client_socket < 0) {
perror("accept failed");
exit(1);
}
while (1) {
sleep(1);
char buffer[1024];
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
client_input = buffer;
parse_input(client_input);
}
close(client_socket);
pthread_join(accept_thread, NULL);
return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-7b-gpt4-1.1
|
jondurbin
| 2023-06-22T14:59:23Z | 1,425 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-09T23:30:55Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.1
---
## Delta 1.1
This is a minor update of https://huggingface.co/jondurbin/airoboros-7b-gpt4 with ~1k more coding instructions, and fixes/improvements to context instructions from https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1
The remainder of the model card is duplicated from the original.
## Overview
This is a fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the provided context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible, to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one, or a list) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question-and-answer pairs in the context. If you ask a question of a document containing question-answer pairs, the model may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
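A minimal sketch of that question-mark workaround, applied only to the input text blocks and never to the instruction block (`sanitize_context` is a made-up helper name):
```python
def sanitize_context(text: str) -> str:
    # Replace question marks in context text so embedded Q/A pairs aren't
    # mistaken for additional questions the model should answer.
    return text.replace("?", ".")

print(sanitize_context("Q: Is the store open on Sundays? A: Yes, 10am-4pm."))
# -> Q: Is the store open on Sundays. A: Yes, 10am-4pm.
```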
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install asyncio fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from uvicorn.request import Request
app = FastAPI()
class FileModel(BaseModel):
id: str
file: str
@app.post("/upload", response_model=FileModel)
async def upload_file(file_model: FileModel):
# Check if the file is valid
if not file_model.file:
raise HTTPException(status_code=400, detail="File must be provided")
# Calculate the sha256 checksum of the file
sha256_checksum = await calculate_sha256(file_model.file)
# Insert the file into the database
await insert_into_db(file_model.id, sha256_checksum)
return {"sha256_checksum": sha256_checksum}
async def calculate_sha256(file_path: str) -> str:
# Implement the sha256 calculation logic here
pass
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
# Implement the database insertion logic here
pass
async def get_current_date():
return {"date": datetime.now().isoformat()}
async def main():
# Start the FastAPI webserver
app.run(debug=True)
if __name__ == "__main__":
await main()
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
import hashlib
file_path = Path(file_path).read_text()
hash_obj = hashlib.sha256()
hash_obj.update(file_path)
sha256_checksum = hash_obj.hexdigest()
return sha256_checksum
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
import psycopg2
conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
cursor = conn.cursor()
query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
cursor.execute(query, (file_id, sha256_checksum))
conn.commit()
cursor.close()
conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
void insert(char* key, char* value) {
TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
if (!parent) {
perror("malloc failed");
exit(1);
}
strcpy(parent->key, key);
strcpy(parent->value, value);
parent->left = parent->right = NULL;
if (!tree) {
tree = parent;
} else {
insert(parent->key, parent->value);
}
}
char* search(char* key) {
TreeNode* current = tree;
while (current != NULL) {
if (strcmp(key, current->key) == 0) {
return current->value;
}
current = current->right;
}
return NULL;
}
void print_tree() {
TreeNode* current = tree;
while (current != NULL) {
printf("%s = %s\n", current->key, current->value);
current = current->right;
}
}
void* accept_client(void* args) {
int client_socket = *(int*)args;
char buffer[1024];
while (1) {
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
char* input = buffer;
char* input_token;
while ((input_token = strtok(input, " \n")) != NULL) {
switch (input_token[0]) {
case 'G':
if (!search(input_token)) {
perror("Value not found");
exit(1);
}
printf("%s = %s\n", input_token, search(input_token));
break;
case 'S':
if (!strcmp(input_token, "SET")) {
char* key = strtok(NULL, " \n");
char* value = strtok(NULL, " \n");
if (!key || !value) {
perror("Invalid input");
exit(1);
}
insert(key, value);
printf("%s set %s to %s\n", input_token, key, value);
} else {
perror("Invalid command");
exit(1);
}
break;
default:
perror("Invalid input");
exit(1);
}
input = strtok(NULL, " \n");
}
}
return NULL;
}
int main() {
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("socket failed");
exit(1);
}
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(8080);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind failed");
exit(1);
}
if (listen(server_socket, 5) < 0) {
perror("listen failed");
exit(1);
}
pthread_t accept_thread;
pthread_create(&accept_thread, NULL, accept_client, &server_socket);
char* client_input;
int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
if (client_socket < 0) {
perror("accept failed");
exit(1);
}
while (1) {
sleep(1);
char buffer[1024];
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
client_input = buffer;
parse_input(client_input);
}
close(client_socket);
pthread_join(accept_thread, NULL);
return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-33b-gpt4-1.3
|
jondurbin
| 2023-06-22T14:58:46Z | 1,430 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T21:42:46Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 33b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now (see the example after this list).
- Thousands of new orca-style reasoning instructions, this time with the reasoning first, then the answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
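For example (a plausible reading of the " PLAINFORMAT" naming, not an official spec): appending the literal suffix to a coding instruction asks for bare code, without markdown fences or surrounding commentary:
```
USER: Write a python script that prints the first 10 Fibonacci numbers. PLAINFORMAT
ASSISTANT:
```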
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
swl-models/CuteYukiMix-KawaShow
|
swl-models
| 2023-06-22T14:58:18Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:50:37Z |
---
license: creativeml-openrail-m
---
|
Barianc/distilroberta-base-finetuned-wikitext2
|
Barianc
| 2023-06-22T14:58:01Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T14:16:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9234 |
| 1.992 | 2.0 | 4812 | 1.8828 |
| 1.9603 | 3.0 | 7218 | 1.8223 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
arubenruben/NER-PT-BERT-CRF-Conll2003
|
arubenruben
| 2023-06-22T14:55:59Z | 79 | 5 |
transformers
|
[
"transformers",
"pytorch",
"BERT_CRF",
"token-classification",
"custom_code",
"pt",
"dataset:arubenruben/portuguese_wikineural",
"dataset:Babelscape/wikineural",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2023-05-29T18:01:54Z |
---
inference: False
datasets:
- arubenruben/portuguese_wikineural
- Babelscape/wikineural
language:
- pt
metrics:
- f1
pipeline_tag: token-classification
---
# Portuguese NER BERT-CRF Conll 2003
This model is a fine-tuned BERT model adapted for Named Entity Recognition (NER) tasks. It utilizes Conditional Random Fields (CRF) as the decoder.
The model follows the Conll 2003 labeling scheme for NER. Additionally, it provides options for HAREM Default and Selective labeling schemes.
## How to Use
You can employ this model using the Transformers library's *pipeline* for NER, or incorporate it as a conventional Transformer in the HuggingFace ecosystem.
```python
from transformers import pipeline
import torch
import nltk
# Build the NER pipeline; trust_remote_code is needed because the
# BERT-CRF architecture ships as custom code with the checkpoint.
ner_classifier = pipeline(
    "ner",
    model="arubenruben/NER-PT-BERT-CRF-Conll2003",
    device=torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu"),
    trust_remote_code=True
)
text = "FCPorto vence o Benfica por 5-0 no Estádio do Dragão"
# The pipeline expects pre-tokenized input; wordpunct_tokenize is regex-based
# and needs no extra NLTK downloads.
tokens = nltk.wordpunct_tokenize(text)
result = ner_classifier(tokens)
```
## Demo
There is a [Notebook](https://github.com/arubenruben/PT-Pump-Up/blob/master/BERT-CRF.ipynb) available to test our code.
## PT-Pump-Up
This model is integrated in the project [PT-Pump-Up](https://github.com/arubenruben/PT-Pump-Up)
## Evaluation
#### Testing Data
The model was tested on the Portuguese Wikineural Dataset.
### Results
F1-Score: 0.951
## Citation
Citation will be made available soon.
**BibTeX:**
:(
|
swl-models/CuteYukiMix-b-X
|
swl-models
| 2023-06-22T14:53:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:49:12Z |
---
license: creativeml-openrail-m
---
|
swl-models/CuteYukiMix-v3.0
|
swl-models
| 2023-06-22T14:49:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:34:19Z |
---
license: creativeml-openrail-m
---
|
swl-models/CuteYukiMix-v2.0
|
swl-models
| 2023-06-22T14:47:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:34:08Z |
---
license: creativeml-openrail-m
---
|
user1251/soccer_finetuned_model2_final5
|
user1251
| 2023-06-22T14:40:51Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:39:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 60 | 1.8761 |
| No log | 2.0 | 120 | 1.5666 |
| No log | 3.0 | 180 | 1.4985 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
espnet/dongwei_ami_vad_rnn
|
espnet
| 2023-06-22T14:39:27Z | 0 | 0 | null |
[
"arxiv:1804.00015",
"region:us"
] | null | 2023-06-22T14:19:21Z |
## Environments
- date: `Thu May 4 10:25:40 EDT 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.8.1`
- Git hash: `1bd1db914b21bfb5ae5acbe2fc7162e3815ed260`
- Commit date: `Thu May 4 08:48:15 2023 -0400`
## Model info
- Model link: https://huggingface.co/espnet/dongwei_ami_vad_rnn
- ASR config: conf/tuning/train_vad_rnn.yaml
- Decode config: conf/tuning/decode_rnn.yaml
## exp/vad_train_asr_transformer_raw
### PRECISION
|dataset|value|
|---|---|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_dev/result.txt|0.9311|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_eval/result.txt|0.9547|
### RECALL
|dataset|value|
|---|---|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_dev/result.txt|0.9277|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_eval/result.txt|0.9412|
### F1_SCORE
|dataset|value|
|---|---|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_dev/result.txt|0.9294|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_eval/result.txt|0.9479|
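As a quick consistency check, each F1 score above is the harmonic mean of the corresponding precision and recall:
```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9311, 0.9277), 4))  # 0.9294 (ihm_dev)
print(round(f1(0.9547, 0.9412), 4))  # 0.9479 (ihm_eval)
```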
## VAD config
<details><summary>expand</summary>
```
config: conf/tuning/train_vad_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/vad_train_vad_rnn_raw
ngpu: 1
seed: 0
num_workers: 3
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 2
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 14000000
valid_batch_bins: null
train_shape_file:
- exp/vad_stats_raw/train/speech_shape
- exp/vad_stats_raw/train/text_shape
valid_shape_file:
- exp/vad_stats_raw/valid/speech_shape
- exp/vad_stats_raw/valid/text_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/ihm_train/wav.scp
- speech
- sound
- - dump/raw/ihm_train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/ihm_dev/wav.scp
- speech
- sound
- - dump/raw/ihm_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.003
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
pre_postencoder_norm: false
init: null
input_size: null
use_preprocessor: true
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
segment_length: 10.0
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/vad_stats_raw/train/feats_stats.npz
model: espnet
model_conf:
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: rnn
encoder_conf:
rnn_type: gru
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 320
output_size: 320
dropout: 0.2
subsample:
- 1
- 1
- 1
- 1
required:
- output_dir
version: '202304'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
HoussemMammeri/BERT-V1
|
HoussemMammeri
| 2023-06-22T14:38:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T12:25:02Z |
---
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: BERT-V1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93568
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-V1
This model is a fine-tuned version of [robertsamoilescu/movie-sentiment-bert-base-uncased](https://huggingface.co/robertsamoilescu/movie-sentiment-bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3975
- Accuracy: 0.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0862 | 1.0 | 1563 | 0.2823 | 0.9331 |
| 0.0263 | 2.0 | 3126 | 0.3975 | 0.9357 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
swl-models/ColorBox
|
swl-models
| 2023-06-22T14:37:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:32:57Z |
---
license: creativeml-openrail-m
---
|
swl-models/lora-pub
|
swl-models
| 2023-06-22T14:31:38Z | 0 | 10 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-02-02T05:44:17Z |
---
license: creativeml-openrail-m
---
|
UnaiGurbindo/Taxi-v3
|
UnaiGurbindo
| 2023-06-22T14:28:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T14:28:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="UnaiGurbindo/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
UnaiGurbindo/q-FrozenLake-v1-4x4-noSlippery
|
UnaiGurbindo
| 2023-06-22T14:26:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T14:26:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="UnaiGurbindo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Y93dev/my_food_model
|
Y93dev
| 2023-06-22T14:18:02Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-22T13:17:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4724
- Accuracy: 0.933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.262 | 0.99 | 62 | 3.0234 | 0.775 |
| 1.8087 | 2.0 | 125 | 1.6821 | 0.853 |
| 1.2098 | 2.99 | 187 | 1.1582 | 0.91 |
| 0.8913 | 4.0 | 250 | 0.8991 | 0.92 |
| 0.7071 | 4.99 | 312 | 0.7349 | 0.912 |
| 0.5607 | 6.0 | 375 | 0.6197 | 0.921 |
| 0.4785 | 6.99 | 437 | 0.5506 | 0.929 |
| 0.3926 | 8.0 | 500 | 0.5015 | 0.93 |
| 0.3906 | 8.99 | 562 | 0.4902 | 0.927 |
| 0.3866 | 9.92 | 620 | 0.4724 | 0.933 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bartuso/stable-diffusion-oxified
|
bartuso
| 2023-06-22T14:15:45Z | 30 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T14:02:28Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: an image of the oxenai ox
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - bartuso/stable-diffusion-oxified
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on an image of the oxenai ox using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
evatan/cucumber_w_prior
|
evatan
| 2023-06-22T14:11:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T13:46:00Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cucumber
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/cucumber_w_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cucumber using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
Barianc/distilgpt2-finetuned-wikitext2
|
Barianc
| 2023-06-22T14:10:34Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T10:20:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
user1251/soccer_finetuned_model2_final3
|
user1251
| 2023-06-22T14:08:06Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:06:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 61 | 1.8610 |
| No log | 2.0 | 122 | 1.5670 |
| No log | 3.0 | 183 | 1.4985 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nev/dalle-mini-pytorch
|
nev
| 2023-06-22T14:04:21Z | 173 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
The small DALLE-mini converted to PyTorch
[Colab](https://colab.research.google.com/drive/1Blh-hTfhyry-YvitH8A95Duzwtm17Xz-?usp=sharing)
|
user1251/soccer_finetuned_model2_final2
|
user1251
| 2023-06-22T14:02:55Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T13:45:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final2
This model is a fine-tuned version of [user1251/soccer_finetuned_model2_final1](https://huggingface.co/user1251/soccer_finetuned_model2_final1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8481 | 1.0 | 1232 | 0.7025 |
| 0.7256 | 2.0 | 2464 | 0.6326 |
| 0.6693 | 3.0 | 3696 | 0.6121 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dhiruHF/falcon-7b-mailreview
|
dhiruHF
| 2023-06-22T13:51:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T13:51:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
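For reference, a sketch of the same settings as a `transformers` `BitsAndBytesConfig` (reconstructed from the list above, not taken from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```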
### Framework versions
- PEFT 0.4.0.dev0
|
rodrigoclira/dqn-SpaceInvadersNoFrameskip-v4
|
rodrigoclira
| 2023-06-22T13:44:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T13:44:17Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 499.50 +/- 185.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rodrigoclira -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rodrigoclira -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rodrigoclira
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
user1251/soccer_finetuned_model2_final1
|
user1251
| 2023-06-22T13:38:28Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T13:35:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 1.3764 |
| No log | 2.0 | 250 | 1.1704 |
| No log | 3.0 | 375 | 1.1184 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
evatan/alvan_dog_wo_prior
|
evatan
| 2023-06-22T13:34:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T13:18:25Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/alvan_dog_wo_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
Koantek/dolly_llama-v2
|
Koantek
| 2023-06-22T13:33:33Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T12:05:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
seyon0924/my_awesome_eli5_clm-model
|
seyon0924
| 2023-06-22T13:17:37Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T08:17:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 27 | 3.2738 |
| No log | 2.0 | 54 | 2.5154 |
| No log | 3.0 | 81 | 2.3048 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
michaelfeil/codegen2-1B-gptj
|
michaelfeil
| 2023-06-22T13:13:46Z | 168 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gptj",
"text-generation",
"fauxpilot",
"gpt-j",
"float16",
"arxiv:2305.02309",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T20:41:37Z |
---
tags:
- fauxpilot
- gpt-j
- float16
license: apache-2.0
---
# Conversion for FauxPilot, Codegen-2 as GPT-J
It feels like GPT-J and acts like any other GPT-J, but it's CodeGen-2 weights under the hood.
Converted on 2023-05-22 using
```
python /home/michael/fauxpilot/converter/codegen_gptj_convert.py --code_model Salesforce/codegen2-1B /home/michael/tmp-codegen2-1B-gptj
```
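Since the result is a stock GPT-J checkpoint, it should load with the standard `transformers` classes; a usage sketch (not from the original card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The converted weights use the stock GPT-J architecture, so no custom code is needed.
tokenizer = AutoTokenizer.from_pretrained("michaelfeil/codegen2-1B-gptj")
model = AutoModelForCausalLM.from_pretrained("michaelfeil/codegen2-1B-gptj")

input_ids = tokenizer("def hello_world():", return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(input_ids, max_length=64)[0]))
```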
# Licence and other remarks:
Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
see https://huggingface.co/Salesforce/codegen2-1B
# CodeGen2 (CodeGen2-16B)
## Model description
[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality.
### Causal sampling
For regular causal sampling, simply generate completions given the context:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-16B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-16B", trust_remote_code=True, revision="main")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
### Infill sampling
For **infill** sampling, we introduce three new special token types:
* `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` to where you want to sample infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.
For example, if we want to generate infill for the following cursor position of a function:
```python
def hello_world():
    |
    return name
```
we construct an input to the model by
1. Inserting `<mask_1>` token in place of cursor position
2. Append `<sep>` token to indicate the boundary
3. Insert another `<mask_1>` to indicate which mask we want to infill.
The final snippet looks as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-16B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-16B", trust_remote_code=True, revision="main")
def format(prefix, suffix):
    return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
prefix = "def hello_world():
"
suffix = " return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```
You might want to truncate the model output with `<eom>`.
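For instance, a minimal way to do that truncation (reusing `generated_ids` and `text` from the snippet above):
```python
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
infill = completion.split("<eom>")[0]  # keep only the span up to the end-of-mask token
print(infill)
```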
## Training data
This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure
CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption.
Please refer to the paper for more details.
## Evaluation results
We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
## Intended use and limitations
As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2023codegen2,
title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
journal={arXiv preprint},
year={2023}
}
```
|
swl-models/DarkSushiMix-Darker
|
swl-models
| 2023-06-22T13:13:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:11:45Z |
---
license: creativeml-openrail-m
---
|
swl-models/DarkSushiMix-Brighter
|
swl-models
| 2023-06-22T13:11:32Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:09:14Z |
---
license: creativeml-openrail-m
---
|
A1abz/ppo-Huggy
|
A1abz
| 2023-06-22T13:08:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T13:08:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: A1abz/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
swl-models/DarkSushiMix-Colorful
|
swl-models
| 2023-06-22T13:02:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:01:07Z |
---
license: creativeml-openrail-m
---
|
pellucid/translation_model_en_ko
|
pellucid
| 2023-06-22T12:50:59Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T11:45:57Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: translation_model_en_ko
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
config: en-ko
split: train
args: en-ko
metrics:
- name: Bleu
type: bleu
value: 0.05548080664699851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_model_en_ko
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 7.6056
- Bleu: 0.0555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
janezb/sloberta-finetuned-dlib-1850-1919
|
janezb
| 2023-06-22T12:43:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"sl",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T12:31:10Z |
---
license: cc-by-sa-4.0
language:
- sl
pipeline_tag: fill-mask
---
This model is based on SloBERTa (https://huggingface.co/EMBEDDIA/sloberta), fine-tuned for 5 epochs
on the text of all Slovenian-language documents available on the Slovenian Digital Library (https://dlib.si)
from the period 1850-1919, about 8.2 GB of text in total. Note that the corpus also contained a lot of OCR errors.
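A minimal usage sketch with the standard `fill-mask` pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="janezb/sloberta-finetuned-dlib-1850-1919")
# SloBERTa uses CamemBERT-style tokenization, so the mask token is "<mask>".
print(fill("Ljubljana je <mask> mesto."))
```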
|
gokuls/sa_bert_12_layer_modified_complete_training_48
|
gokuls
| 2023-06-22T12:41:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-20T10:02:27Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_48
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7897
- Accuracy: 0.5117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5942 | 0.05 | 10000 | 6.5714 | 0.1229 |
| 6.1563 | 0.11 | 20000 | 6.3437 | 0.1392 |
| 6.1425 | 0.16 | 30000 | 6.2474 | 0.1444 |
| 6.2249 | 0.22 | 40000 | 6.1900 | 0.1468 |
| 6.1498 | 0.27 | 50000 | 6.1482 | 0.1487 |
| 6.0528 | 0.33 | 60000 | 6.1192 | 0.1492 |
| 6.0103 | 0.38 | 70000 | 6.0762 | 0.1504 |
| 5.8523 | 0.44 | 80000 | 5.8731 | 0.1615 |
| 5.91 | 0.49 | 90000 | 5.7442 | 0.1765 |
| 5.4931 | 0.55 | 100000 | 5.5985 | 0.1952 |
| 5.4145 | 0.6 | 110000 | 5.4716 | 0.2100 |
| 5.3729 | 0.66 | 120000 | 5.3366 | 0.2247 |
| 5.2655 | 0.71 | 130000 | 5.1946 | 0.2417 |
| 5.2975 | 0.76 | 140000 | 5.0287 | 0.2600 |
| 4.9997 | 0.82 | 150000 | 4.8593 | 0.2791 |
| 4.831 | 0.87 | 160000 | 4.6226 | 0.3041 |
| 4.9176 | 0.93 | 170000 | 4.4211 | 0.3257 |
| 4.5352 | 0.98 | 180000 | 4.2328 | 0.3429 |
| 4.1536 | 1.04 | 190000 | 4.0635 | 0.3598 |
| 4.0216 | 1.09 | 200000 | 3.9109 | 0.3755 |
| 4.0744 | 1.15 | 210000 | 3.7761 | 0.3897 |
| 3.7468 | 1.2 | 220000 | 3.6636 | 0.4038 |
| 3.5015 | 1.26 | 230000 | 3.5047 | 0.4236 |
| 3.5717 | 1.31 | 240000 | 3.4014 | 0.4370 |
| 3.1969 | 1.37 | 250000 | 3.3173 | 0.4479 |
| 3.5026 | 1.42 | 260000 | 3.2254 | 0.4588 |
| 3.287 | 1.47 | 270000 | 3.1845 | 0.4643 |
| 3.3462 | 1.53 | 280000 | 3.0979 | 0.4738 |
| 3.3996 | 1.58 | 290000 | 3.0808 | 0.4764 |
| 3.2324 | 1.64 | 300000 | 3.0163 | 0.4843 |
| 3.0972 | 1.69 | 310000 | 2.9738 | 0.4890 |
| 3.1621 | 1.75 | 320000 | 2.9450 | 0.4927 |
| 3.0282 | 1.8 | 330000 | 2.9135 | 0.4964 |
| 3.0674 | 1.86 | 340000 | 2.9059 | 0.4979 |
| 2.9437 | 1.91 | 350000 | 2.8810 | 0.5007 |
| 2.8208 | 1.97 | 360000 | 2.8316 | 0.5064 |
| 2.9005 | 2.02 | 370000 | 2.8061 | 0.5098 |
| 2.7574 | 2.08 | 380000 | 2.7897 | 0.5117 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kebei/poca-SoccerTwos
|
kebei
| 2023-06-22T12:35:53Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-22T12:35:46Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: kebei/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
An-619/FastSAM
|
An-619
| 2023-06-22T12:33:54Z | 0 | 50 | null |
[
"segment anything",
"en",
"arxiv:2306.12156",
"arxiv:2112.10003",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T12:17:24Z |
---
license: apache-2.0
language:
- en
tags:
- segment anything
---

# Fast Segment Anything
[[`Paper`](https://arxiv.org/pdf/2306.12156.pdf)] [[`Web Demo`](https://huggingface.co/spaces/An-619/FastSAM)] [[`Colab demo`](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9?usp=sharing)] [[`Model Zoo`](#model-checkpoints)] [[`BibTeX`](#citing-fastsam)]

The **Fast Segment Anything Model (FastSAM)** is a CNN-based Segment Anything Model trained on only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves performance comparable to the SAM method at **50× higher run-time speed**.

## Installation
Clone the repository locally:
```
git clone https://github.com/CASIA-IVA-Lab/FastSAM.git
```
Create the conda env. The code requires `python>=3.7`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
```
conda create -n FastSAM python=3.9
conda activate FastSAM
```
Install the packages:
```
cd FastSAM
pip install -r requirements.txt
```
Install clip:
```
pip install git+https://github.com/openai/CLIP.git
```
## <a name="GettingStarted"></a> Getting Started
First download a [model checkpoint](#model-checkpoints).
Then, you can run the scripts to try the everything mode and three prompt modes.
```
# Everything mode
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg
```
```
# text prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --text_prompt "the yellow dog"
```
```
# box prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --box_prompt [570,200,230,400]
```
```
# points prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --point_prompt "[[520,360],[620,300]]" --point_label "[1,0]"
```
You are also welcomed to try our Colab demo: [FastSAM_example.ipynb](https://colab.research.google.com/drive/1oX14f6IneGGw612WgVlAiy91UHwFAvr9?usp=sharing).
## Different Inference Options
We provide various options for different purposes; details are in [MORE_USAGES.md](MORE_USAGES.md).
## Web demo
In the [web demo](https://huggingface.co/spaces/An-619/FastSAM), you can upload your own image, select an input size from 512 to 1024, and choose whether to visualize in high quality. High-quality visualization additionally shows segmentation edges that are easier to observe. The web demo currently supports Everything mode only; support for the other modes will be added in the future.
<!-- The [web demo](https://huggingface.co/spaces/An-619/FastSAM) can process your custom image using the Everything mode. -->

## <a name="Models"></a>Model Checkpoints
Two versions of the model are available, with different sizes. Click the links below to download the checkpoint for the corresponding model type.
- **`default` or `FastSAM`: [YOLOv8x based Segment Anything Model.](https://drive.google.com/file/d/1m1sjY4ihXBU1fZXdQ-Xdj-mDltW-2Rqv/view?usp=sharing)**
- `FastSAM-s`: [YOLOv8s based Segment Anything Model.](https://drive.google.com/file/d/10XmSj6mmpmRb8NhXbtiuO9cTTBwR_9SV/view?usp=sharing)
## Results
All results were obtained on a single NVIDIA GeForce RTX 3090.
### 1. Inference time
Running speed under different numbers of point prompts (ms).
| method | params | 1 | 10 | 100 | E(16x16) | E(32x32*) | E(64x64) |
|:------------------:|:--------:|:-----:|:-----:|:-----:|:----------:|:-----------:|:----------:|
| SAM-H | 0.6G | 446 | 464 | 627 | 852 | 2099 | 6972 |
| SAM-B | 136M | 110 | 125 | 230 | 432 | 1383 | 5417 |
| FastSAM | 68M | 40 | 40 | 40 | 40 | 40 | 40 |
### 2. Memory usage
| Dataset | Method | GPU Memory (MB) |
|:-----------:|:-----------------:|:-----------------------:|
| COCO 2017 | FastSAM | 2608 |
| COCO 2017 | SAM-H | 7060 |
| COCO 2017 | SAM-B | 4670 |
### 3. Zero-shot Transfer Experiments
#### Edge Detection
Tested on the BSDS500 dataset.
|method | year| ODS | OIS | AP | R50 |
|:----------:|:-------:|:--------:|:--------:|:------:|:-----:|
| HED | 2015| .788 | .808 | .840 | .923 |
| SAM | 2023| .768 | .786 | .794 | .928 |
| FastSAM | 2023| .750 | .790 | .793 | .903 |
#### Object Proposals
##### COCO
|method | AR10 | AR100 | AR1000 | AUC |
|:---------------------------:|:------:|:-------:|--------:|:------:|
| SAM-H E64 | 15.5 | 45.6 | 67.7 | 32.1 |
| SAM-H E32 | 18.5 | 49.5 | 62.5 | 33.7 |
| SAM-B E32 | 11.4 | 39.6 | 59.1 | 27.3 |
| FastSAM | 15.7 | 47.3 | 63.7 | 32.2 |
##### LVIS
bbox AR@1000

| method | all | small | med. | large |
|:---------------:|:-----:|:------:|:-----:|:------:|
| ViTDet-H | 65.0 | 53.2 | 83.3 | 91.2 |

Zero-shot transfer methods:

| method | all | small | med. | large |
|:---------------:|:-----:|:------:|:-----:|:------:|
| SAM-H E64 | 52.1 | 36.6 | 75.1 | 88.2 |
| SAM-H E32 | 50.3 | 33.1 | 76.2 | 89.8 |
| SAM-B E32 | 45.0 | 29.3 | 68.7 | 80.6 |
| FastSAM | 57.1 | 44.3 | 77.1 | 85.3 |
#### Instance Segmentation On COCO 2017
|method | AP | APS | APM | APL |
|:--------------:|:--------:|:--------:|:------:|:-----:|
| ViTDet-H | .510 | .320 | .543 | .689 |
| SAM | .465 | .308 | .510 | .617 |
| FastSAM | .379 | .239 | .434 | .500 |
### 4. Performance Visualization
Several segmentation results:
#### Natural Images

#### Text to Mask

### 5. Downstream Tasks
Results on several downstream tasks show the model's effectiveness.
#### Anomaly Detection

#### Salient Object Detection

#### Building Extraction

## License
The model is licensed under the [Apache 2.0 license](LICENSE).
## Acknowledgement
- [Segment Anything](https://segment-anything.com/) provides the SA-1B dataset and the base codes.
- [YOLOv8](https://github.com/ultralytics/ultralytics) provides code and pre-trained models.
- [YOLACT](https://arxiv.org/abs/2112.10003) provides a powerful instance segmentation method.
- [Grounded-Segment-Anything](https://huggingface.co/spaces/yizhangliu/Grounded-Segment-Anything) provides a useful web demo template.
## Citing FastSAM
If you find this project useful for your research, please consider citing the following BibTeX entry.
```
@misc{zhao2023fast,
title={Fast Segment Anything},
author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
year={2023},
eprint={2306.12156},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!-- <p align="center">
<a href="https://star-history.com/#geekyutao/Inpaint-Anything&Date">
<img src="https://api.star-history.com/svg?repos=geekyutao/Inpaint-Anything&type=Date" alt="Star History Chart">
</a>
</p> -->
|
TheBloke/Flan-OpenLlama-7B-GGML
|
TheBloke
| 2023-06-22T12:28:59Z | 0 | 8 | null |
[
"license:other",
"region:us"
] | null | 2023-06-22T08:56:04Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Concept of Mind's Flan Open Llama 7B GGML
These files are GGML format model files for [Concept of Mind's Flan Open Llama 7B](https://huggingface.co/conceptofmind/Flan-Open-Llama-7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Flan-OpenLlama-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Flan-OpenLlama-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/conceptofmind/Flan-Open-Llama-7b)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| flan-openllama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| flan-openllama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| flan-openllama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| flan-openllama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| flan-openllama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| flan-openllama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| flan-openllama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| flan-openllama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| flan-openllama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| flan-openllama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| flan-openllama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| flan-openllama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| flan-openllama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| flan-openllama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m flan-openllama-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
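You can also load these GGML files from Python via `ctransformers`. A minimal sketch (the chosen quant file and the `gpu_layers` value are assumptions; `gpu_layers` needs a CUDA-enabled build):
```python
from ctransformers import AutoModelForCausalLM

# Download the chosen GGML file from this repo and load it as a llama-architecture model
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Flan-OpenLlama-7B-GGML",
    model_file="flan-openllama-7b.ggmlv3.q4_K_M.bin",
    model_type="llama",
    gpu_layers=32,  # set to 0 if you have no GPU acceleration
)
print(llm("### Instruction: Write a story about llamas\n### Response:"))
```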
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer, vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius, Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost, Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donors!
<!-- footer end -->
# Original model card: Concept of Mind's Flan Open Llama 7B
No original model card was provided.
|
evatan/alvan_dog
|
evatan
| 2023-06-22T12:22:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T11:54:43Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/alvan_dog
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
wordcab/whisper-large-int8-ru
|
wordcab
| 2023-06-22T12:21:04Z | 3 | 0 |
transformers
|
[
"transformers",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-22T07:37:41Z |
---
license: apache-2.0
language:
- ru
---
This is a ctranslate2 int8 version of the [mitchelldehaven/whisper-large-v2-ru](https://huggingface.co/mitchelldehaven/whisper-large-v2-ru) model.
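A minimal transcription sketch using [faster-whisper](https://github.com/guillaumekln/faster-whisper), which loads CTranslate2 Whisper weights directly from the Hub (the audio path is a placeholder):
```python
from faster_whisper import WhisperModel

# compute_type must match the int8 conversion of this repo
model = WhisperModel("wordcab/whisper-large-int8-ru", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav", language="ru")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```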
|
wordcab/whisper-large-int8-fp16-ru
|
wordcab
| 2023-06-22T12:20:32Z | 4 | 0 |
transformers
|
[
"transformers",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-22T08:02:45Z |
---
license: apache-2.0
language:
- ru
---
This is a ctranslate2 int8_float16 version of the [mitchelldehaven/whisper-large-v2-ru](https://huggingface.co/mitchelldehaven/whisper-large-v2-ru) model.
|
Tessro/ppo-LunarLander-v
|
Tessro
| 2023-06-22T12:02:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T12:01:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: abhi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.02 +/- 71.97
name: mean_reward
verified: false
---
# **abhi** Agent playing **LunarLander-v2**
This is a trained model of an **abhi** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename and the PPO algorithm are assumptions inferred from the repo name; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="Tessro/ppo-LunarLander-v", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jondurbin/airoboros-7b-gpt4-1.4-fp16
|
jondurbin
| 2023-06-22T11:51:25Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T10:46:32Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
float16 version of https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4
|
nathan-cai/Pixelcopter-PLE-v0
|
nathan-cai
| 2023-06-22T11:18:40Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T22:38:35Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.30 +/- 28.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fireballoon/baichuan-vicuna-chinese-7b-gptq
|
fireballoon
| 2023-06-22T11:03:11Z | 8 | 17 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"zh",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:mhhmm/leetcode-solutions-python",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-20T14:55:01Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- QingyiSi/Alpaca-CoT
- mhhmm/leetcode-solutions-python
---
# baichuan-vicuna-chinese-7b-gptq
[baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b) quantized with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
Quantised from [baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b) with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ); inference runs in about 7 GB of GPU memory.
# Quantization config
```
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": true,
"sym": true,
"true_sequential": true,
"model_name_or_path": null,
"model_file_base_name": null
}
```
# Inference
```ipython
>>> from transformers import AutoTokenizer, TextStreamer
>>> from auto_gptq import AutoGPTQForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b-gptq", use_fast=False)
>>> model = AutoGPTQForCausalLM.from_quantized("fireballoon/baichuan-vicuna-chinese-7b-gptq", device="cuda:0")
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("How can I improve my time management skills?") # user message
>>> generate_ids = model.generate(input_ids=tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
To improve your time management skills, you should first identify your priorities and set goals for each day. Then, create a schedule that allows you to allocate time to each task. Finally, prioritize your tasks and stick to your schedule.
'''
```
# Test cases on [Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)
```
Q: Xiao Ming has 200 yuan. He spends 66 yuan on books and 12 yuan on stationery. How much less money does he have than before?
A: Xiao Ming has 66 + 12 = 78 yuan less than before.
Q: Please translate this English sentence: Timmy was fined a large sum of money by the judge for drunken driving
A: Timmy因醉驾被法官处以巨额罚款。 (i.e. "Timmy was fined a large sum by the judge for drunk driving.")
Q: Are there any stylistic differences between Beethoven and Tchaikovsky?
A: Beethoven and Tchaikovsky are both classical composers, but their styles differ in some respects. Beethoven is known for a serious, solemn and profound musical style, often marked by introspection and emotional depth. Tchaikovsky is known for an ornate, elegant and refined style, often dramatic and emotionally intense. Beethoven's music is usually described as Classical, while Tchaikovsky's is usually described as Romantic. Overall, Beethoven's music tends more toward introspection and emotional depth, while Tchaikovsky's tends more toward splendour and refinement.
```
```
Q: Write a tender high-school love story. The male lead is "me"; the female lead is called Xiaomei. I fell for Xiaomei in my first year of high school. Start writing.
A: My girlfriend Xiaomei and I met in our first year of high school. It was a sunny afternoon, and I was reading in the classroom when I suddenly heard someone call my name. I turned around and saw Xiaomei standing in front of me, smiling at me. In that moment the whole world seemed to stop, and every gaze seemed to settle on the two of us.
From that day on I started paying attention to Xiaomei. I learned she liked comics, so I bought her lots of comic books; I learned she liked music, so I often took her to concerts. We went to the library together, took walks around the sports field, and ate at street stalls.
As we grew closer, I realised I had fallen deeply in love with Xiaomei. I wrote her love letters every day, sent her flowers, planned surprises for her. I thought of every possible way to make her happy, to make her laugh.
But Xiaomei did not seem to like me the way I liked her. She often turned down my invitations and ignored my feelings. I thought she was just shy, so I treated her even better and appeared in front of her even more often.
Yet no matter how hard I tried, Xiaomei never seemed to see my good points. I thought she just needed time to accept my feelings, so I waited even more patiently.
But the waiting did not bring the result I hoped for. In the end Xiaomei broke up with me and refused all my requests. I was shocked and heartbroken, and could not understand why.
I had thought our love was unbreakable, but Xiaomei said it was not good enough. I felt lost and hurt, and did not speak for a whole week.
As time passed, though, I began to realise that Xiaomei's rejection was not my failure but hers. I had done all I could to love her and care for her, but perhaps she simply was not right for me.
I began to focus on other things, such as study, sport and socialising. I made new friends, joined new activities, and even set out on a new adventure.
When I looked back again, I found I no longer loved Xiaomei the way I once had. I had learned to accept her rejection and move on.
I don't know what the future will bring, but I know one thing: I am ready to embrace life, whatever it has in store for me.
```
```
Q: I want to learn programming from scratch. Please give me a three-month study plan.
A: Of course. Here is a three-month programming study plan:
1. Start with the fundamentals of programming, such as variables and loops. You can do this by reading online tutorials and books.
2. Next, learn the basic syntax of a programming language. This will let you write code and get it running.
3. Once you have a good grasp of the basic syntax, start learning core programming concepts such as conditional statements and methods.
4. Next, look at the different programming languages in use, such as Python or Java.
5. Finally, apply what you have learned by completing small projects. This will help you bring your knowledge to bear on real problems.
6. Practise regularly, and try out different languages and techniques.
7. Don't be afraid to ask for help or to seek feedback from others.
8. Experiment patiently, and don't be afraid to make mistakes.
9. Review and practise regularly to consolidate your knowledge.
10. Keep up with new technologies and the latest trends in programming languages.
Remember, learning to program is a process that takes time and practice. Stay persistent, stay curious, and you will become a proficient programmer.
```
```
Q: Implement a Python function that uses dynamic programming to find the longest common subsequence of two input strings
A: Here is a minimal Python function for finding the longest common subsequence of two input strings:
def longest_common_subsets(str1, str2):
m, n = len(str1), len(str2)
i, j = 0, 0
while i < m and j < n:
while str1[i] == str2[j]:
i += 1
j += 1
if i < m:
j += 1
else:
break
return j - i
The function works by initialising the lengths `m` and `n` of the two input strings as the computation cost for its variables `i` and `j`. The loop then starts from the two indices `i` and `j`.
The two strings are compared throughout the loop until some character in `str1` is not in `str2`. In that case the loop terminates and the step above is executed, updating `i` to the index of the longer of the input strings (the index of the shorter string in the previous run).
The program then checks whether the current index `i` is greater than `m`; if so, dynamic programming is applied (the loop continues for `j` further updates so as to use the longest common subsequence).
Finally, the function returns `j - i`, which gives the length of the longest common subsequence of the input strings, taking the longer sequence length `j` minus the index `i` of the first reachable longest common subsequence. This value is always greater than the sum of the lengths of the two input strings.
```
|
saikiranmaddukuri/autotrain-sample-text-classification-68910137538
|
saikiranmaddukuri
| 2023-06-22T11:01:56Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:saikiranmaddukuri/autotrain-data-sample-text-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T11:01:30Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- saikiranmaddukuri/autotrain-data-sample-text-classification
co2_eq_emissions:
emissions: 0.00390882136996407
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 68910137538
- CO2 Emissions (in grams): 0.0039
## Validation Metrics
- Loss: 1.101
- Accuracy: 0.333
- Macro F1: 0.167
- Micro F1: 0.333
- Weighted F1: 0.167
- Macro Precision: 0.111
- Micro Precision: 0.333
- Weighted Precision: 0.111
- Macro Recall: 0.333
- Micro Recall: 0.333
- Weighted Recall: 0.333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/saikiranmaddukuri/autotrain-sample-text-classification-68910137538
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("saikiranmaddukuri/autotrain-sample-text-classification-68910137538", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("saikiranmaddukuri/autotrain-sample-text-classification-68910137538", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
vg055/roberta-base-bne-finetuned-Tass2020
|
vg055
| 2023-06-22T10:41:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T10:40:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-bne-finetuned-Tass2020
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-Tass2020
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1451
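A minimal usage sketch with the `fill-mask` pipeline (the Spanish example sentence is an arbitrary placeholder):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vg055/roberta-base-bne-finetuned-Tass2020")
print(unmasker("Me encanta este <mask>."))
```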
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.951 | 1.0 | 15 | 3.4728 |
| 3.3715 | 2.0 | 30 | 2.9967 |
| 3.131 | 3.0 | 45 | 3.1550 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bandrocks/my_awesome_eminem_clm-model
|
bandrocks
| 2023-06-22T10:34:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T08:53:40Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eminem_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eminem_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2919
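A minimal generation sketch with the `text-generation` pipeline (the prompt is an arbitrary placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bandrocks/my_awesome_eminem_clm-model")
print(generator("His palms are sweaty", max_new_tokens=40)[0]["generated_text"])
```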
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4213 | 1.0 | 3840 | 1.3670 |
| 1.3347 | 2.0 | 7680 | 1.3081 |
| 1.3152 | 3.0 | 11520 | 1.2919 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AIfenaike/CoQA-bloom-560m
|
AIfenaike
| 2023-06-22T10:26:41Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T10:26:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
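To load the adapter you first need the base model quantised with the same config. A minimal sketch, assuming `bigscience/bloom-560m` as the base model (inferred from the repo name, not stated in this card):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Recreate the 8-bit quantization config listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "AIfenaike/CoQA-bloom-560m")
```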
### Framework versions
- PEFT 0.4.0.dev0
|
ketankishore/finetune_llm_falcon7b
|
ketankishore
| 2023-06-22T10:25:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T10:25:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
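A minimal loading sketch that recreates the 4-bit config above, assuming `tiiuae/falcon-7b` as the base model (inferred from the repo name, not stated in this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Recreate the nf4 double-quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "ketankishore/finetune_llm_falcon7b")
```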
### Framework versions
- PEFT 0.4.0.dev0
|
A1abz/ppo-LunarLander-v2
|
A1abz
| 2023-06-22T10:22:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T10:22:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.90 +/- 20.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="A1abz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jnwprk/hate_detection_model
|
jnwprk
| 2023-06-22T10:18:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T09:42:59Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hate_detection_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_detection_model
This model is a fine-tuned version of [sangrimlee/bert-base-multilingual-cased-nsmc](https://huggingface.co/sangrimlee/bert-base-multilingual-cased-nsmc) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2937
- Accuracy: 0.7686
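A minimal usage sketch with the `text-classification` pipeline (the Korean example sentence is a placeholder; Korean input is an assumption based on the multilingual-NSMC base model):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jnwprk/hate_detection_model")
print(classifier("정말 재미있는 영화였어요"))  # placeholder input
```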
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 62 | 0.4613 | 0.7834 |
| No log | 2.0 | 124 | 0.5033 | 0.7516 |
| No log | 3.0 | 186 | 0.4699 | 0.7898 |
| No log | 4.0 | 248 | 0.5516 | 0.7516 |
| No log | 5.0 | 310 | 0.6990 | 0.7219 |
| No log | 6.0 | 372 | 0.6500 | 0.7665 |
| No log | 7.0 | 434 | 0.7347 | 0.7856 |
| No log | 8.0 | 496 | 0.9104 | 0.7389 |
| 0.3218 | 9.0 | 558 | 0.7689 | 0.8153 |
| 0.3218 | 10.0 | 620 | 0.9496 | 0.7792 |
| 0.3218 | 11.0 | 682 | 0.9598 | 0.7707 |
| 0.3218 | 12.0 | 744 | 1.2402 | 0.7091 |
| 0.3218 | 13.0 | 806 | 1.1616 | 0.7537 |
| 0.3218 | 14.0 | 868 | 1.0903 | 0.7771 |
| 0.3218 | 15.0 | 930 | 1.3674 | 0.7304 |
| 0.3218 | 16.0 | 992 | 1.1962 | 0.7728 |
| 0.0623 | 17.0 | 1054 | 1.3640 | 0.7452 |
| 0.0623 | 18.0 | 1116 | 1.3093 | 0.7622 |
| 0.0623 | 19.0 | 1178 | 1.3108 | 0.7707 |
| 0.0623 | 20.0 | 1240 | 1.2937 | 0.7686 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Leukschrauber/Taxi-v3
|
Leukschrauber
| 2023-06-22T10:05:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T10:05:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym, pickle
from huggingface_hub import hf_hub_download

# Minimal stand-in for the Deep RL Course's load_from_hub helper
load_from_hub = lambda repo_id, filename: pickle.load(open(hf_hub_download(repo_id, filename), "rb"))
model = load_from_hub(repo_id="Leukschrauber/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hts98/whisper-large-v2-paper_
|
hts98
| 2023-06-22T10:02:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-22T06:34:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-paper_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-paper_
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4133
- Wer: 47.7467
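A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="hts98/whisper-large-v2-paper_")
print(asr("sample.wav")["text"])
```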
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 143 | 0.3626 | 71.8596 |
| No log | 2.0 | 286 | 0.3398 | 50.4925 |
| No log | 3.0 | 429 | 0.3426 | 52.2600 |
| 0.3684 | 4.0 | 572 | 0.3541 | 46.2800 |
| 0.3684 | 5.0 | 715 | 0.3721 | 46.6114 |
| 0.3684 | 6.0 | 858 | 0.4133 | 47.7467 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
ammaradel/PSU-LLaMA-Inference
|
ammaradel
| 2023-06-22T09:59:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"question-answering",
"en",
"region:us"
] |
question-answering
| 2023-06-15T07:55:31Z |
---
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- cer
pipeline_tag: question-answering
language:
- en
library_name: adapter-transformers
---
LLaMA model fine-tuned on PSU Dataset 3 (8K lines)
|
hongrui/mammogram_v_2
|
hongrui
| 2023-06-22T09:30:37Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-22T00:26:34Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
dhifanrazaqa/t5-end2end-questions-generation
|
dhifanrazaqa
| 2023-06-22T09:19:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T06:51:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [muchad/idt5-base](https://huggingface.co/muchad/idt5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8449
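A minimal usage sketch; the `generate questions:` prefix follows the common squad_modified_for_t5_qg convention and is an assumption, as is the Indonesian example passage:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="dhifanrazaqa/t5-end2end-questions-generation")
text = "Candi Borobudur adalah candi Buddha terbesar di dunia."  # placeholder passage
print(qg(f"generate questions: {text}", max_length=128)[0]["generated_text"])
```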
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3156 | 0.34 | 100 | 2.2625 |
| 2.5509 | 0.67 | 200 | 2.0394 |
| 2.3619 | 1.01 | 300 | 1.9596 |
| 2.2501 | 1.34 | 400 | 1.9272 |
| 2.2 | 1.68 | 500 | 1.9074 |
| 2.1682 | 2.02 | 600 | 1.8882 |
| 2.1222 | 2.35 | 700 | 1.8893 |
| 2.0874 | 2.69 | 800 | 1.8722 |
| 2.0751 | 3.03 | 900 | 1.8656 |
| 2.0501 | 3.36 | 1000 | 1.8506 |
| 2.0338 | 3.7 | 1100 | 1.8491 |
| 2.0326 | 4.03 | 1200 | 1.8428 |
| 1.9992 | 4.37 | 1300 | 1.8445 |
| 2.0081 | 4.71 | 1400 | 1.8449 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
slimsha2dy/my_awesome_model
|
slimsha2dy
| 2023-06-22T09:18:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T09:07:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1855
- Accuracy: 0.925
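A minimal usage sketch; `top_k=None` returns scores for every emotion label (the input sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="slimsha2dy/my_awesome_model", top_k=None)
print(classifier("I am so happy today!"))
```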
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2413 | 1.0 | 1000 | 0.2020 | 0.922 |
| 0.1451 | 2.0 | 2000 | 0.1855 | 0.925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sivateja-trustt/falcon7b
|
sivateja-trustt
| 2023-06-22T09:14:46Z | 0 | 1 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T08:37:34Z |
---
license: apache-2.0
language:
- en
---
|
hangjoni/ppo-LunarLander-v2
|
hangjoni
| 2023-06-22T09:13:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:11:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.66 +/- 16.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="hangjoni/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
heon98/my_awesome_eli5_clm-model
|
heon98
| 2023-06-22T09:06:30Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T08:41:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
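These settings correspond to roughly the following `TrainingArguments` (a sketch; `output_dir` and the per-epoch evaluation are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_eli5_clm-model",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    evaluation_strategy="epoch",  # assumed from the per-epoch results table
)
```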
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.871 | 1.0 | 1133 | 3.7614 |
| 3.778 | 2.0 | 2266 | 3.7469 |
| 3.7327 | 3.0 | 3399 | 3.7432 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Naonori/billsum_model_for_test
|
Naonori
| 2023-06-22T09:03:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T09:01:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: billsum_model_for_test
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1461
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_model_for_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4420
- Rouge1: 0.1461
- Rouge2: 0.0524
- Rougel: 0.121
- Rougelsum: 0.121
- Gen Len: 19.0
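A minimal usage sketch with the `summarization` pipeline (the input text is a placeholder bill excerpt):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Naonori/billsum_model_for_test")
text = "The people of the State of California do enact as follows: ..."  # placeholder
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```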
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7503 | 0.1244 | 0.035 | 0.105 | 0.1052 | 19.0 |
| No log | 2.0 | 124 | 2.5250 | 0.1361 | 0.0455 | 0.1141 | 0.1144 | 19.0 |
| No log | 3.0 | 186 | 2.4594 | 0.1459 | 0.0523 | 0.1202 | 0.1202 | 19.0 |
| No log | 4.0 | 248 | 2.4420 | 0.1461 | 0.0524 | 0.121 | 0.121 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dipesh1111/Redpajama-7b-chat-lora-wiseyak-retrained
|
dipesh1111
| 2023-06-22T09:02:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T09:02:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
user1251/soccer_finetuned_model_final4
|
user1251
| 2023-06-22T08:49:40Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T08:44:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model_final4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model_final4
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 189 | 3.6828 |
| No log | 2.0 | 378 | 3.6200 |
| 3.7068 | 3.0 | 567 | 3.6037 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Pranjal-666/ppo-Huggy
|
Pranjal-666
| 2023-06-22T08:42:34Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T08:42:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Pranjal-666/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|