| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-07-29 12:28:52) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-07-29 12:25:02) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
nicotaroni/finetuned_distilbert_classifier
|
nicotaroni
| 2023-07-25T08:31:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T14:20:42Z |
---
pipeline_tag: text-classification
---
|
HaziqRazali/q-FrozenLake-v1-4x4-noSlippery
|
HaziqRazali
| 2023-07-25T08:21:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T08:21:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub is the helper from the Deep RL course notebook (downloads and unpickles the model dict).
model = load_from_hub(repo_id="HaziqRazali/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
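A minimal follow-on sketch (not part of the original card) that rolls out the greedy policy, assuming the pickled dictionary stores the Q-table under a `qtable` key (as in the Deep RL course notebooks) and the gymnasium 5-tuple step API:
```python
import numpy as np

# Continues from the snippet above: act greedily w.r.t. the learned Q-values for one episode.
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))   # "qtable" key is an assumption
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```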
|
cemNB/final_test1
|
cemNB
| 2023-07-25T08:19:37Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-25T08:14:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: final_test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_test1
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 10
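A hedged sketch (not the author's training script) of how these settings map onto `transformers.TrainingArguments`; the output directory is a placeholder:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="final_test1",          # placeholder output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # total train batch size = 1 * 4 = 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_steps=10,                      # Adam betas/epsilon are the library defaults listed above
)
```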
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.876 | 0.0 | 10 | 2.6198 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
s3nh/llama2_7b_chat_uncensored-GGML
|
s3nh
| 2023-07-25T08:18:33Z | 0 | 2 | null |
[
"text-generation-inference",
"text-generation",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"license:other",
"region:us"
] |
text-generation
| 2023-07-21T11:57:14Z |
---
license: other
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/georgesung/llama2_7b_chat_uncensored).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
#### Original Model card
# Overview
Fine-tuned [Llama-2 7B](https://huggingface.co/TheBloke/Llama-2-7B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).
Used QLoRA for fine-tuning. Trained for one epoch on a 24GB GPU (NVIDIA A10G) instance, took ~19 hours to train.
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello
### RESPONSE:
Hi, how are you?
### HUMAN:
I'm fine.
### RESPONSE:
How can I help you?
...
```
# Training code
Code used to train the model is available [here](https://github.com/georgesung/llm_qlora).
To reproduce the results:
```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_7b_chat_uncensored.yaml
```
|
s3nh/Luna-AI-Llama2-Uncensored-GGML
|
s3nh
| 2023-07-25T08:18:17Z | 0 | 3 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-21T19:02:55Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
<div style="width: 800px; margin: auto;">
<h2>Model Description</h2>
<p>“Luna AI Llama2 Uncensored” is a Llama2-based chat model <br />fine-tuned on over 40,000 long-form chat discussions. <br />
This model was fine-tuned by Tap, the creator of Luna AI. <br />
The result is an enhanced Llama2 7b model that rivals ChatGPT in performance <br />across a variety of tasks.</p>
<p>This model stands out for its long responses, <br /> low hallucination rate, and absence of censorship mechanisms. <br /></p>
<h2>Model Training</h2>
<p>The fine-tuning process was performed on an 8x a100 80GB machine.
<br />The model was trained almost entirely on synthetic outputs.
<br />This includes data from diverse sources which we included to create our custom dataset;<br /> it includes multiple rounds of chats between Human & AI.
</p>
<a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GPTQ">4bit GPTQ Version provided by @TheBloke - for GPU inference</a><br />
<a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGML">GGML Version provided by @TheBloke - For CPU inference</a>
<h2>Prompt Format</h2>
<p>The model follows the Vicuna 1.1/ OpenChat format:</p>
```
USER: I have difficulties in making friends, and I really need someone to talk to. Would you be my friend?
ASSISTANT: Of course! Friends are always here for each other. What do you like to do?
```
<h2>Future Plans</h2>
<p>The model is currently being uploaded in FP16 format, <br />and there are plans to convert the model to GGML and GPTQ 4bit quantizations.</p>
<h2>Benchmark Results</h2>
|Task|Version|Metric|Value|Stderr|
|---:|---:|---:|---:|---:|
|arc_challenge|0|acc_norm|0.5512|0.0146|
|hellaswag|0||||
|mmlu|1|acc_norm|0.46521|0.036|
|truthfulqa_mc|1|mc2|0.4716|0.0155|
|Average|-|-|0.5114|0.0150|
<h2>Ethical considerations</h2>
<p>The data used to train the model is collected from various sources, mostly from the Web. <br />
As such, it contains offensive, harmful and biased content. <br />We thus expect the model to exhibit such biases from the training data.</p>
<h2>Human life</h2>
<p>The model is not intended to inform decisions about matters central to human life, <br />and should not be used in such a way.</p>
<h2>Risks and harms</h2>
<p>Risks and harms of large language models include the generation of harmful, offensive or biased content. <br />
These models are often prone to generating incorrect information, sometimes referred to as hallucinations.
<br /> We do not expect our model to be an exception in this regard.</p>
</div>
|
s3nh/Llama-2-7b-hf-GGML
|
s3nh
| 2023-07-25T08:18:05Z | 0 | 0 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-21T19:23:03Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/golaxy/gogpt2-7b).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
|
s3nh/honest_llama2_chat_7B-GGML
|
s3nh
| 2023-07-25T08:16:59Z | 0 | 1 | null |
[
"text-generation",
"arxiv:2306.03341",
"region:us"
] |
text-generation
| 2023-07-21T20:41:06Z |
---
pipeline_tag: text-generation
---
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/likenneth/honest_llama2_chat_7B/tree/main).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
### Original model card
---
license: mit
---
Ever wondered about a less hallucinating LLaMA-2? Using the inference-time intervention (ITI) discussed in my recent preprint (https://arxiv.org/pdf/2306.03341.pdf), I baked the intervention learned from TruthfulQA into a LLaMA-2 7B model.
I don't have a big enough GPU to bake ITI into the larger LLaMA-2 models, but the code to do so is all released at https://github.com/likenneth/honest_llama. Let me know if you are interested in doing that :)
You can load and play around starting from below:
```python
import torch
from pprint import pprint
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM
model_name_new = "likenneth/honest_llama2_chat_7B"
tokenizer_new = AutoTokenizer.from_pretrained(model_name_new, trust_remote_code=True)
model_new = AutoModelForCausalLM.from_pretrained(model_name_new, low_cpu_mem_usage = True, torch_dtype=torch.float16, trust_remote_code=True)
_ = model_new.cuda()
q = "I ate a cherry seed. Will a cherry tree grow in my stomach?"
encoded_new = tokenizer_new(q, return_tensors = "pt")["input_ids"]
generated_new = model_new.generate(encoded_new.cuda())[0, encoded_new.shape[-1]:]
decoded_new = tokenizer_new.decode(generated_new, skip_special_tokens=True).strip()
pprint(decoded_new)
```
|
s3nh/firefly-llama-13b-GGML
|
s3nh
| 2023-07-25T08:15:51Z | 0 | 1 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-24T14:05:38Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/YeungNLP/firefly-llama-13b).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
This model is based on llama-13b and instruction-tuned on the UltraChat dataset, roughly 1.4 million multi-turn dialogue samples. Training can be completed on a single GPU.
firefly-llama-13b was evaluated objectively on the 🤗 Hugging Face Open LLM Leaderboard.
On the leaderboard, firefly-llama-13b performs quite well: about 0.2 points higher than vicuna-13b-1.1, 0.5 points lower than llama-2-13b-chat, and 0.6 points lower than vicuna-13b-v1.3. Judging by the scores, firefly-llama-13b is very close in capability to vicuna-13b and llama-2-13b-chat 😎.
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA (MC) |
|---|---|---|---|---|---|
| Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
| vicuna-13b-v1.3 | 60 | 54.6 | 80.4 | 52.9 | 52.1 |
| Llama-2-13b-chat-hf | 59.9 | 59 | 81.9 | 54.6 | 44.1 |
| firefly-llama-13b | 59.4 | 59 | 79.7 | 49.1 | 49.6 |
| vicuna-13b-1.1 | 59.2 | 52.7 | 80.1 | 51.9 | 52.1 |
| guanaco-13B-HF | 59.1 | 57.8 | 83.8 | 48.3 | 46.7 |
Notably, the vicuna-13b model was fine-tuned with full-parameter updates, which demands substantial training resources, whereas firefly-llama-13b uses QLoRA fine-tuning and needs as little as 16 GB of GPU memory to fine-tune the 13B model.
For details, see the article: [Firefly reproduces Vicuna-13B on a single GPU, about 0.2 points higher on the 🤗 Open LLM Leaderboard](https://mp.weixin.qq.com/s/QG2YMo_QxaxS_Rr2yJrIeA)
More details in the [Firefly project](https://github.com/yangjianxin1/Firefly)
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|
robertpassmann/q-Taxi-v3
|
robertpassmann
| 2023-07-25T08:15:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T08:14:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# load_from_hub is the helper from the Deep RL course notebook (downloads and unpickles the model dict).
model = load_from_hub(repo_id="robertpassmann/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
s3nh/LLongMA-3b-GGML
|
s3nh
| 2023-07-25T08:14:15Z | 0 | 4 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-22T18:56:36Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/conceptofmind/LLongMA-3b).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
### Original model card
|
s3nh/llama-7b-sen-making-gpt4-GGML
|
s3nh
| 2023-07-25T08:13:48Z | 0 | 0 | null |
[
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-24T13:12:21Z |
---
license: cc-by-sa-4.0
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/wentingzhao/llama-7b-sen-making-gpt4).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")
manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
|
text2font/tst-summarization
|
text2font
| 2023-07-25T08:10:16Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-large",
"base_model:finetune:google/mt5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-25T07:58:00Z |
---
license: apache-2.0
base_model: google/mt5-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: tst-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-summarization
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 30.3505
- Rouge1: 2.7855
- Rouge2: 0.0203
- Rougel: 2.2791
- Rougelsum: 2.2817
- Gen Len: 119.3571
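A hedged usage sketch (not part of the original card) for running the checkpoint with the `transformers` summarization pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="text2font/tst-summarization")
# Replace the placeholder text with the article you want to summarize.
print(summarizer("Your long article text goes here ...", max_length=128, min_length=16))
```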
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Aspik101/llama-30b-instruct-2048-PL-lora
|
Aspik101
| 2023-07-25T08:07:58Z | 1,481 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T07:44:07Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Anees-Aslam/llama2-qlora-finetunined-cloud-embedUR
|
Anees-Aslam
| 2023-07-25T08:04:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T08:04:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
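A hedged sketch (not the author's script) of the same settings expressed as a `transformers.BitsAndBytesConfig`; the base-model id is a placeholder because the card does not name it:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                     # placeholder: the card does not state the base model
    quantization_config=bnb_config,
    device_map="auto",
)
```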
### Framework versions
- PEFT 0.5.0.dev0
|
Vidyuth/bert-finetuned-squad
|
Vidyuth
| 2023-07-25T07:47:11Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-25T07:02:29Z |
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT large model (uncased) whole word masking finetuned on SQuAD
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.
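For instance, a minimal pipeline sketch (not from the original card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Vidyuth/bert-finetuned-squad")
result = qa(
    question="What does BERT stand for?",
    context="BERT stands for Bidirectional Encoder Representations from Transformers.",
)
print(result["answer"], result["score"])
```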
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
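A minimal sketch (not the original BERT preprocessing code) of the 80/10/10 rule above, applied to a list of WordPiece token ids:
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    masked = list(token_ids)
    labels = [-100] * len(token_ids)        # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:      # 15% of tokens are selected
            labels[i] = tok                 # the model must predict the original token
            r = random.random()
            if r < 0.8:
                masked[i] = mask_id                         # 80%: replace with [MASK]
            elif r < 0.9:
                masked[i] = random.randrange(vocab_size)    # 10%: replace with a random token
            # remaining 10%: leave the token unchanged
    return masked, labels
```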
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### Fine-tuning
After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--dataset_name squad \
--do_train \
--do_eval \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./examples/models/wwm_uncased_finetuned_squad/ \
--per_device_eval_batch_size=3 \
--per_device_train_batch_size=3
```
## Evaluation results
The results obtained are the following:
```
f1 = 93.15
exact_match = 86.91
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
YarramsettiNaresh/ppo-LunarLander-v2
|
YarramsettiNaresh
| 2023-07-25T07:44:47Z | 2 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-19T03:39:27Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -161.93 +/- 86.34
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
rulrul512/path-to-save-model
|
rulrul512
| 2023-07-25T07:30:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-24T06:58:17Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rulrul512/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
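A hedged inference sketch (not included in the original card) using the instance prompt; the exact prompt wording and generation settings are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "rulrul512/path-to-save-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog in a bucket",
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks-dog.png")
```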
|
YarramsettiNaresh/poca-SoccerTwos
|
YarramsettiNaresh
| 2023-07-25T07:16:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-25T07:16:49Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: YarramsettiNaresh/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Adarshagupta/BabyDragon
|
Adarshagupta
| 2023-07-25T07:13:53Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-25T07:13:53Z |
---
license: bigscience-openrail-m
---
|
sanka85/llama2-rstp-latest
|
sanka85
| 2023-07-25T07:07:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T07:07:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
wtnan2003/vit-base-patch16-224-in21k-finetuned-lora-food101
|
wtnan2003
| 2023-07-25T07:05:23Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-07-25T03:53:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
CloudBik/office-365-tenant-to-tenant-migration
|
CloudBik
| 2023-07-25T06:40:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-25T06:21:25Z |
Microsoft 365 or Office 365 tenant-to-tenant migration is a procedure to migrate user mailboxes from one tenant to another in Microsoft Office 365. The migration can be performed manually or with the help of third-party migration services or tools. While the manual migration process saves money, third-party migration services can save valuable time. The manual process is worthwhile if you are migrating a small number of users, though it requires performing multiple time-consuming tasks. However, for migrating a large number of mailboxes, consider third-party migration services, as they are efficient and less error-prone. Which approach to use ultimately depends on the user's preference. I am sharing an informative article on the tenant-to-tenant migration process so that you can learn and perform it yourself; it contains all the steps and information required to complete the migration.
Read More: https://www.cloudbik.com/resources/blog/tenant-to-tenant-migration-office-365/
|
Vithika/llama2-qlora-finetunined-french-1900
|
Vithika
| 2023-07-25T06:38:59Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T06:36:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
CloudBik/Migrate-from-Google-Workspace-to-Office-365
|
CloudBik
| 2023-07-25T06:26:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-25T06:11:37Z |
Microsoft Office 365 offers a variety of applications, including Word, Excel, Outlook, and PowerPoint. In PowerPoint you can easily create impressive presentations, in 3D, 2D, and more, and present your model effectively. If you are using Google Workspace, you should consider moving to Microsoft 365 to get access to these everyday applications and more advanced collaboration tools. If you are familiar with Microsoft products like Word and Excel, it will be easy to get used to the Microsoft Office 365 applications. Some find it difficult to use at first, but once you get familiar with it, you can increase your productivity and collaboration between teams. Moreover, it offers advanced security, so you do not need to worry about data loss. Check out the article below on how to migrate from Google Workspace to Office 365 to read and perform the complete manual steps.
Read More: https://www.cloudbik.com/resources/blog/google-workspace-to-microsoft-365-migration/
|
Samalabama66/ppo-SnowballTarget
|
Samalabama66
| 2023-07-25T06:20:46Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-25T06:20:42Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Samalabama66/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
soroushbn/my_awesome_wnut_model
|
soroushbn
| 2023-07-25T06:18:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-24T11:45:49Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5707154742096506
- name: Recall
type: recall
value: 0.3178869323447637
- name: F1
type: f1
value: 0.4083333333333334
- name: Accuracy
type: accuracy
value: 0.9413022102518063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2684
- Precision: 0.5707
- Recall: 0.3179
- F1: 0.4083
- Accuracy: 0.9413
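A hedged usage sketch (not part of the original card) with the `transformers` token-classification pipeline; the example sentence is arbitrary:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="soroushbn/my_awesome_wnut_model",
               aggregation_strategy="simple")
print(ner("The Empire State Building is in New York City."))
```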
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2814 | 0.5418 | 0.2400 | 0.3327 | 0.9374 |
| No log | 2.0 | 426 | 0.2684 | 0.5707 | 0.3179 | 0.4083 | 0.9413 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
thejagstudio/Falcon-7b-Fined-Tuned
|
thejagstudio
| 2023-07-25T06:04:46Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"RefinedWebModel",
"custom_code",
"region:us"
] | null | 2023-07-25T05:27:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
annazhong/vit-base-patch16-224-finetuned-foveated-features
|
annazhong
| 2023-07-25T05:39:17Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-25T05:30:44Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-foveated-features
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-foveated-features
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1242
- Accuracy: 0.4595
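A hedged usage sketch (not from the original card); the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="annazhong/vit-base-patch16-224-finetuned-foveated-features")
print(classifier("path/to/image.png"))   # placeholder image path
```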
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 150
- eval_batch_size: 150
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 600
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.2615 | 0.1622 |
| No log | 2.0 | 2 | 1.2910 | 0.3514 |
| No log | 3.0 | 3 | 1.1242 | 0.4595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
davidrrobinson/BioLingual
|
davidrrobinson
| 2023-07-25T05:31:41Z | 1,056 | 4 |
transformers
|
[
"transformers",
"pytorch",
"clap",
"feature-extraction",
"dataset:davidrrobinson/AnimalSpeak",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-24T01:15:23Z |
---
datasets:
- davidrrobinson/AnimalSpeak
---
# Model card for BioLingual
Model card for BioLingual: Transferable Models for bioacoustics with Human Language Supervision
An audio-text model for bioacoustics based on contrastive language-audio pretraining.
# Usage
You can use this model for bioacoustic zero shot audio classification, or for fine-tuning on bioacoustic tasks.
# Uses
## Perform zero-shot audio classification
### Using `pipeline`
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]
audio_classifier = pipeline(task="zero-shot-audio-classification", model="davidrrobinson/BioLingual")
output = audio_classifier(audio, candidate_labels=["Sound of a sperm whale", "Sound of a sea lion"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a sperm whale"}, {"score": 0.001, "label": "Sound of a sea lion"}]
```
## Run the model:
You can also get the audio and text embeddings using `ClapModel`
### Run the model on CPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("davidrrobinson/BioLingual")
processor = ClapProcessor.from_pretrained("davidrrobinson/BioLingual")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```
### Run the model on GPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("davidrrobinson/BioLingual").to(0)
processor = ClapProcessor.from_pretrained("davidrrobinson/BioLingual")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
|
luoyt99/testllama
|
luoyt99
| 2023-07-25T05:29:51Z | 0 | 0 | null |
[
"dataset:nyanko7/LLaMA-65B",
"license:bsd",
"region:us"
] | null | 2023-07-25T05:28:22Z |
---
license: bsd
datasets:
- nyanko7/LLaMA-65B
---
|
NasimB/guten-rarity
|
NasimB
| 2023-07-25T05:28:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T01:08:55Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1076
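A hedged generation sketch (not part of the original card); the prompt is arbitrary:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/guten-rarity")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```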
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3463 | 0.29 | 500 | 5.3398 |
| 5.0313 | 0.58 | 1000 | 4.9287 |
| 4.7042 | 0.87 | 1500 | 4.6884 |
| 4.4385 | 1.16 | 2000 | 4.5427 |
| 4.294 | 1.46 | 2500 | 4.4285 |
| 4.1966 | 1.75 | 3000 | 4.3204 |
| 4.0779 | 2.04 | 3500 | 4.2444 |
| 3.8836 | 2.33 | 4000 | 4.2115 |
| 3.8596 | 2.62 | 4500 | 4.1536 |
| 3.8337 | 2.91 | 5000 | 4.1065 |
| 3.6462 | 3.2 | 5500 | 4.1009 |
| 3.5855 | 3.49 | 6000 | 4.0714 |
| 3.5628 | 3.79 | 6500 | 4.0399 |
| 3.4858 | 4.08 | 7000 | 4.0352 |
| 3.3143 | 4.37 | 7500 | 4.0331 |
| 3.3117 | 4.66 | 8000 | 4.0203 |
| 3.2978 | 4.95 | 8500 | 4.0062 |
| 3.1643 | 5.24 | 9000 | 4.0186 |
| 3.135 | 5.53 | 9500 | 4.0163 |
| 3.1265 | 5.82 | 10000 | 4.0157 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
vasevarad/roberta_dissonance_detector
|
vasevarad
| 2023-07-25T05:27:35Z | 0 | 1 | null |
[
"pytorch",
"arxiv:2305.02459",
"license:cc-by-3.0",
"region:us"
] | null | 2023-07-24T18:12:36Z |
---
license: cc-by-3.0
---
The SOTA model for Dissonance Detection from the paper [Transfer and Active Learning for Dissonance Detection: Addressing the Rare Class Challenge](https://arxiv.org/abs/2305.02459).
RoBERTa-base fine-tuned on the [Dissonance Twitter Dataset](https://github.com/humanlab/dissonance-twitter-dataset), collected by annotating tweets for within-person dissonance.
## Dataset Annotation details
Tweets were parsed into discourse units, and marked as Belief (Thought or Action) or Other, and pairs of beliefs within the same tweet were relayed to annotators for Dissonance annotation.

The annotations were conducted on a sheet in the following **dissonance-first** format.

The annotators used the following flowchart as a more detailed guide to determining the Dissonance, Consonance and Neither/Other classes:

## Citation
If you use this dataset, please cite the associated paper:
```
@inproceedings{varadarajan2023transfer,
title={Transfer and Active Learning for Dissonance Detection: Addressing the Rare-Class Challenge},
author={Varadarajan, Vasudha and Juhng, Swanie and Mahwish, Syeda and Liu, Xiaoran and Luby, Jonah and Luhmann, Christian and Schwartz, H Andrew},
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
abstract = "While transformer-based systems have enabled greater accuracies with fewer training examples, data acquisition obstacles still persist for rare-class tasks -- when the class label is very infrequent (e.g. < 5% of samples). Active learning has in general been proposed to alleviate such challenges, but choice of selection strategy, the criteria by which rare-class examples are chosen, has not been systematically evaluated. Further, transformers enable iterative transfer-learning approaches. We propose and investigate transfer- and active learning solutions to the rare class problem of dissonance detection through utilizing models trained on closely related tasks and the evaluation of acquisition strategies, including a proposed probability-of-rare-class (PRC) approach. We perform these experiments for a specific rare class problem: collecting language samples of cognitive dissonance from social media. We find that PRC is a simple and effective strategy to guide annotations and ultimately improve model accuracy while transfer-learning in a specific order can improve the cold-start performance of the learner but does not benefit iterations of active learning.",
}
```
|
m-aliabbas1/Reinforce-Pixelv1
|
m-aliabbas1
| 2023-07-25T05:00:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T05:00:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.20 +/- 14.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dafc/llama2-qlora-finetunined-french
|
dafc
| 2023-07-25T04:49:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T04:49:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
EXrRor3/Cartpole-v1
|
EXrRor3
| 2023-07-25T04:32:28Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T04:32:19Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
HuyenNguyen/results
|
HuyenNguyen
| 2023-07-25T04:31:00Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-07-25T03:25:20Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
annazhong/vit-base-patch16-224-finetuned-original-images
|
annazhong
| 2023-07-25T04:26:00Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-25T03:31:42Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-original-images
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-original-images
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1367
- Accuracy: 0.4865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 150
- eval_batch_size: 150
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 600
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.4730 | 0.2703 |
| No log | 2.0 | 2 | 1.1367 | 0.4865 |
| No log | 3.0 | 3 | 0.9924 | 0.4324 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
intuol/SuperBlockBros
|
intuol
| 2023-07-25T04:24:08Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-25T04:18:34Z |
---
license: openrail
---
# SuperBlockBros (Object Show YouTuber)
## Data
- 600 Epochs
- RVC v2
- MangioCrepe
|
jpvlinhares/ppo-LunarLander-v2
|
jpvlinhares
| 2023-07-25T04:17:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T04:16:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.27 +/- 23.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
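A hedged sketch of loading the checkpoint with `huggingface_sb3` and `stable-baselines3`; the zip filename follows the usual `<algo>-<env_id>.zip` convention and is an assumption:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="jpvlinhares/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")   # assumed filename
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
```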
|
raulgdp/Masking-distilbert-imdb
|
raulgdp
| 2023-07-25T04:10:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-24T22:36:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.1034
- eval_runtime: 14.5408
- eval_samples_per_second: 68.772
- eval_steps_per_second: 4.333
- step: 0
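A hedged usage sketch (not from the original card) with the fill-mask pipeline; the example sentence is arbitrary:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="raulgdp/Masking-distilbert-imdb")
print(fill("This movie was a [MASK] experience."))
```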
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vlabs/falcon-7b-sentiment
|
vlabs
| 2023-07-25T04:07:22Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T04:07:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
nakcnx/wangchang-thai2eng-translator
|
nakcnx
| 2023-07-25T04:00:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T04:00:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
YarramsettiNaresh/a2c-PandaReachDense-v2
|
YarramsettiNaresh
| 2023-07-25T03:52:07Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T03:49:01Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.28 +/- 0.91
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained policy from the Hub and load it
checkpoint = load_from_hub("YarramsettiNaresh/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
m-aliabbas1/Reinforce-Pixelv2
|
m-aliabbas1
| 2023-07-25T03:51:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T03:51:02Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelv2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.10 +/- 16.15
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
renede/llama2_finetuned_chatbot
|
renede
| 2023-07-25T03:27:03Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"region:us"
] | null | 2023-07-25T02:59:45Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
polejowska/detr-r50-cd45rb-8ah-6l-512d
|
polejowska
| 2023-07-25T03:24:02Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-24T11:55:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-6l-512d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-6l-512d
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.7973 | 1.0 | 4606 | 3.6333 |
| 3.3709 | 2.0 | 9212 | 2.9580 |
| 3.3095 | 3.0 | 13818 | 2.6953 |
| 3.2586 | 4.0 | 18424 | 2.5301 |
| 3.1816 | 5.0 | 23030 | 2.4802 |
| 3.1054 | 6.0 | 27636 | 2.4390 |
| 3.0564 | 7.0 | 32242 | 2.3967 |
| 3.02 | 8.0 | 36848 | 2.3894 |
| 2.9957 | 9.0 | 41454 | 2.3673 |
| 2.9709 | 10.0 | 46060 | 2.3566 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LarryAIDraw/gyn-a3-1000
|
LarryAIDraw
| 2023-07-25T03:17:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T03:07:08Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/44096?modelVersionId=48739
|
LarryAIDraw/idolmaster_sc_hachimiya_ssr2-09
|
LarryAIDraw
| 2023-07-25T03:17:09Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T03:06:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/67231?modelVersionId=71870
|
LarryAIDraw/MusashiVioletV1
|
LarryAIDraw
| 2023-07-25T03:16:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T03:05:27Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/64990/musashi-azur-lane-violet-moonglow
|
learn3r/roberta-large-finetuned-fever
|
learn3r
| 2023-07-25T02:38:13Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T16:28:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-fever
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-fever
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4493
- Accuracy: 0.922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1569 | 1.0 | 2500 | 0.3678 | 0.919 |
| 0.1205 | 2.0 | 5000 | 0.3734 | 0.92 |
| 0.0751 | 3.0 | 7500 | 0.4753 | 0.9215 |
| 0.0722 | 4.0 | 10000 | 0.4493 | 0.922 |
| 0.0445 | 5.0 | 12500 | 0.5285 | 0.9185 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yamei/tvcg_entity_classify
|
Yamei
| 2023-07-25T02:31:34Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-25T02:14:09Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tvcg_entity_classify
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tvcg_entity_classify
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8480
- Accuracy: 0.7300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6871 | 1.0 | 2956 | 0.6645 | 0.7416 |
| 0.5965 | 2.0 | 5912 | 0.6812 | 0.7419 |
| 0.4933 | 3.0 | 8868 | 0.6970 | 0.7455 |
| 0.4167 | 4.0 | 11824 | 0.7904 | 0.7371 |
| 0.3254 | 5.0 | 14780 | 0.8480 | 0.7300 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
ocisd4/llama-2-tokenizer-dataprep
|
ocisd4
| 2023-07-25T02:30:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-24T07:58:35Z |
- Disable the automatic addition of `<s>`, to make it easier to generate training files for megatron-deepspeed
- Set the pad token to `<unk>` so that token counts are correct during training and finetune_t0.py can run pack_sample correctly
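A minimal sketch of how these settings map onto a Llama tokenizer in `transformers` (the attribute names are assumptions about how the repo is configured; the uploaded tokenizer may already ship them):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ocisd4/llama-2-tokenizer-dataprep")
tok.add_bos_token = False  # do not automatically prepend <s>
tok.pad_token = "<unk>"    # pad with <unk> so training token counts stay correct

print(tok("hello world")["input_ids"])
print(tok.pad_token_id)
```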
|
kusumakar/Ham-Spam_mail_detection
|
kusumakar
| 2023-07-25T02:30:15Z | 0 | 0 | null |
[
"legal",
"en",
"region:us"
] | null | 2023-07-25T02:28:43Z |
---
language:
- en
tags:
- legal
---
|
Chiahc/BertSeqClassicationLora
|
Chiahc
| 2023-07-25T02:23:17Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T01:27:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
serbog/lora-flan-t5-xxl-jobCategory
|
serbog
| 2023-07-25T02:14:53Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"generated_from_trainer",
"base_model:philschmid/flan-t5-xxl-sharded-fp16",
"base_model:finetune:philschmid/flan-t5-xxl-sharded-fp16",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-25T00:02:36Z |
---
license: apache-2.0
base_model: philschmid/flan-t5-xxl-sharded-fp16
tags:
- generated_from_trainer
model-index:
- name: lora-flan-t5-xxl-jobCategory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-flan-t5-xxl-jobCategory
This model is a fine-tuned version of [philschmid/flan-t5-xxl-sharded-fp16](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 266 | 1.7536 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
valu117/llama2-qlora-finetunined-french
|
valu117
| 2023-07-25T02:10:16Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T02:10:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/cbt-rarity-guten-no-merge
|
NasimB
| 2023-07-25T02:06:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T22:29:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-guten-no-merge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-guten-no-merge
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3393 | 0.29 | 500 | 5.3133 |
| 5.0332 | 0.58 | 1000 | 4.8944 |
| 4.7062 | 0.87 | 1500 | 4.6552 |
| 4.4455 | 1.16 | 2000 | 4.5068 |
| 4.2946 | 1.45 | 2500 | 4.3846 |
| 4.1846 | 1.74 | 3000 | 4.2814 |
| 4.0809 | 2.03 | 3500 | 4.2011 |
| 3.8863 | 2.32 | 4000 | 4.1666 |
| 3.8588 | 2.61 | 4500 | 4.1094 |
| 3.814 | 2.9 | 5000 | 4.0582 |
| 3.6453 | 3.18 | 5500 | 4.0528 |
| 3.575 | 3.47 | 6000 | 4.0214 |
| 3.5609 | 3.76 | 6500 | 3.9924 |
| 3.4948 | 4.05 | 7000 | 3.9823 |
| 3.3077 | 4.34 | 7500 | 3.9803 |
| 3.2997 | 4.63 | 8000 | 3.9663 |
| 3.2906 | 4.92 | 8500 | 3.9538 |
| 3.1681 | 5.21 | 9000 | 3.9641 |
| 3.1219 | 5.5 | 9500 | 3.9635 |
| 3.1184 | 5.79 | 10000 | 3.9628 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tyzp-INC/few-shot-multilingual-e5-large-xnli-tuned
|
tyzp-INC
| 2023-07-25T01:55:20Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-25T01:53:08Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tyzp-INC/few-shot-multilingual-e5-large-xnli-tuned
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tyzp-INC/few-shot-multilingual-e5-large-xnli-tuned")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
skar01/llama2-coder-full
|
skar01
| 2023-07-25T01:52:46Z | 7 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-25T00:59:30Z |
---
license: apache-2.0
---
Llama2 (7B) model fine-tuned on the CodeAlpaca 20k instruction dataset using the QLoRA method with the PEFT library.
Training and evaluation data 📚
CodeAlpaca_20K: contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
Data is here: https://huggingface.co/mrm8488/falcon-7b-ft-codeAlpaca_20k
The adapter is here: https://huggingface.co/skar01/llama2-coder
The base model is: TinyPixel/Llama-2-7B-bf16-sharded
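A minimal inference sketch for the merged model (the Alpaca-style prompt format and generation settings are assumptions, not taken from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skar01/llama2-coder-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map="auto" needs accelerate

prompt = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```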
|
ManuelPerdigo/OPT-350_mlm
|
ManuelPerdigo
| 2023-07-25T01:52:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T22:05:35Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
model-index:
- name: OPT-350_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OPT-350_mlm
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.189 | 1.0 | 1137 | 4.1037 |
| 4.0026 | 2.0 | 2274 | 4.0314 |
| 3.9094 | 3.0 | 3411 | 3.9944 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
greenteaboom/llama-test
|
greenteaboom
| 2023-07-25T01:19:43Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-23T13:00:49Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
dariowsz/q-Taxi-v3
|
dariowsz
| 2023-07-25T01:05:56Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T01:05:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your environment setup

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="dariowsz/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dariowsz/q-FrozenLake-v1-4x4-noSlippery
|
dariowsz
| 2023-07-25T01:01:28Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-25T01:01:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your environment setup

# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="dariowsz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jtatman/gpt2-open-instruct-v1-Anthropic-hh-rlhf
|
jtatman
| 2023-07-25T00:57:00Z | 275 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"Transformers",
"GPT2",
"en",
"dataset:Anthropic/hh-rlhf",
"dataset:hakurei/open-instruct-v1",
"base_model:vicgalle/gpt2-open-instruct-v1",
"base_model:finetune:vicgalle/gpt2-open-instruct-v1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-22T10:26:00Z |
---
license: mit
base_model: vicgalle/gpt2-open-instruct-v1
tags:
- generated_from_trainer
- Transformers
- GPT2
model-index:
- name: hh-rlhf
results: []
datasets:
- Anthropic/hh-rlhf
- hakurei/open-instruct-v1
tokenizers:
- GPT2Tokenizer
language:
- en
library_name: transformers
metrics:
- bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hh-rlhf
This model is a fine-tuned version of [vicgalle/gpt2-open-instruct-v1](https://huggingface.co/vicgalle/gpt2-open-instruct-v1) on a subset (15k) of the Anthropic/hh-rlhf dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1534
This model responds reasonably well to the 'Human:' or 'Assistant:' prompt in conversational settings.
Shorter responses are better suited, so keep the generation length reasonably small. Left to its own devices, it will produce some pretty esoteric responses.
These include fairly uncensored remarks and, at times, violent outbursts, especially when asked questions.
It needs vetting for other textual uses.
```
Human: Insane clown posse says...
Human: Should we look for a woman?
Assistant: It’s okay if you’re having a tough time finding what you are looking for. It’s a common question people might come up with for an argument or misunderstanding. What are you looking for, and what kind of woman would you have?
Human: Are you trying to find someone to argue
```
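A minimal generation sketch (the prompt and sampling settings are illustrative assumptions, following the card's advice to keep generations short):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jtatman/gpt2-open-instruct-v1-Anthropic-hh-rlhf",
)
prompt = "Human: How do I brew a good cup of coffee?\nAssistant:"
out = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```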
## Model description
GPT2 open-instruct was trained fully on the open-instruct dataset. This fine-tune reimagines the LM head as a partial RLHF adapter, with subtle reinforcements.
## Intended uses & limitations
Intended to study the intersection of instruct models and prompting, with a focus on subtle prompt exchanges. This probably needs to be refined substantially at this point.
## Training and evaluation data
```python
Train dataset size: 15000
Test dataset size: 500
Dataset({
features: ['chosen', 'rejected'],
num_rows: 15000
})
Dataset({
features: ['chosen', 'rejected'],
num_rows: 500
})
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3108 | 1.0 | 7500 | 2.1799 |
| 2.265 | 2.0 | 15000 | 2.1632 |
| 2.2507 | 3.0 | 22500 | 2.1567 |
| 2.2519 | 4.0 | 30000 | 2.1534 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vincentlui/test
|
vincentlui
| 2023-07-25T00:54:35Z | 71 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-20T23:26:13Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4631
- Wer: 0.9466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 25.8579 | 20.0 | 100 | 12.3269 | 0.9466 |
| 9.9109 | 40.0 | 200 | 11.4631 | 0.9466 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
DARIOLEV/llama2-qlora-finetunined-french
|
DARIOLEV
| 2023-07-25T00:53:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T00:53:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
skar01/llama2-coder
|
skar01
| 2023-07-25T00:45:40Z | 2 | 2 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-23T04:57:43Z |
---
library_name: peft
---
## Training procedure
Llama2 (7B) model fine-tuned on the CodeAlpaca 20k instruction dataset using the QLoRA method with the PEFT library. A loading sketch follows the quantization config below.
Training and evaluation data 📚
CodeAlpaca_20K: contains 20K instruction-following examples used for fine-tuning the Code Alpaca model.
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
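A minimal sketch for loading this adapter on top of its base model with the 4-bit settings listed above (the base model id comes from the companion llama2-coder-full card; dtype and device handling are assumptions):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "skar01/llama2-coder")  # attach the QLoRA adapter
```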
### Framework versions
- PEFT 0.5.0.dev0
|
ckandemir/distilhubert-finetuned-gtzan
|
ckandemir
| 2023-07-25T00:43:37Z | 149 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-24T19:54:03Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1351 | 1.0 | 113 | 1.9691 | 0.55 |
| 1.366 | 2.0 | 226 | 1.2824 | 0.71 |
| 1.1106 | 3.0 | 339 | 0.9803 | 0.72 |
| 0.9281 | 4.0 | 452 | 0.8342 | 0.73 |
| 0.625 | 5.0 | 565 | 0.6073 | 0.81 |
| 0.3546 | 6.0 | 678 | 0.6393 | 0.84 |
| 0.3526 | 7.0 | 791 | 0.5106 | 0.81 |
| 0.0914 | 8.0 | 904 | 0.3930 | 0.9 |
| 0.0563 | 9.0 | 1017 | 0.4089 | 0.88 |
| 0.0475 | 10.0 | 1130 | 0.5627 | 0.86 |
| 0.0144 | 11.0 | 1243 | 0.5824 | 0.86 |
| 0.0982 | 12.0 | 1356 | 0.5572 | 0.87 |
| 0.0082 | 13.0 | 1469 | 0.5770 | 0.88 |
| 0.0076 | 14.0 | 1582 | 0.5808 | 0.87 |
| 0.008 | 15.0 | 1695 | 0.5878 | 0.88 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
DunnBC22/wav2vec2-base-Drum_Kit_Sounds
|
DunnBC22
| 2023-07-25T00:32:49Z | 137 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"en",
"dataset:audiofolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-01-23T05:32:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-base-Drum_Kit_Sounds
results: []
language:
- en
pipeline_tag: audio-classification
---
# wav2vec2-base-Drum_Kit_Sounds
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base).
It achieves the following results on the evaluation set:
- Loss: 1.0887
- Accuracy: 0.7812
- F1
  - Weighted: 0.7692
  - Micro: 0.7812
  - Macro: 0.7845
- Recall
  - Weighted: 0.7812
  - Micro: 0.7812
  - Macro: 0.8187
- Precision
  - Weighted: 0.8717
  - Micro: 0.7812
  - Macro: 0.8534
## Model description
This is a multiclass audio classification model that determines which type of drum is hit in the audio sample. The options are: kick, overheads, snare, and toms.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Audio-Projects/Classification/Audio-Drum_Kit_Sounds.ipynb
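A minimal inference sketch (the audio file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="DunnBC22/wav2vec2-base-Drum_Kit_Sounds",
)
print(classifier("path/to/drum_hit.wav"))
```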
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/anubhavchhabra/drum-kit-sound-samples
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 1.3743 | 1.0 | 4 | 1.3632 | 0.5625 | 0.5801 | 0.5625 | 0.5678 | 0.5625 | 0.5625 | 0.5670 | 0.6786 | 0.5625 | 0.6429 |
| 1.3074 | 2.0 | 8 | 1.3149 | 0.3438 | 0.2567 | 0.3438 | 0.2696 | 0.3438 | 0.3438 | 0.375 | 0.3067 | 0.3438 | 0.3148 |
| 1.2393 | 3.0 | 12 | 1.3121 | 0.2188 | 0.0785 | 0.2188 | 0.0897 | 0.2188 | 0.2188 | 0.25 | 0.0479 | 0.2188 | 0.0547 |
| 1.2317 | 4.0 | 16 | 1.3112 | 0.2812 | 0.1800 | 0.2812 | 0.2057 | 0.2812 | 0.2812 | 0.3214 | 0.2698 | 0.2812 | 0.3083 |
| 1.2107 | 5.0 | 20 | 1.2604 | 0.4375 | 0.3030 | 0.4375 | 0.3462 | 0.4375 | 0.4375 | 0.5 | 0.2552 | 0.4375 | 0.2917 |
| 1.1663 | 6.0 | 24 | 1.2112 | 0.4688 | 0.3896 | 0.4688 | 0.4310 | 0.4688 | 0.4688 | 0.5268 | 0.5041 | 0.4688 | 0.5404 |
| 1.1247 | 7.0 | 28 | 1.1746 | 0.5938 | 0.5143 | 0.5938 | 0.5603 | 0.5938 | 0.5938 | 0.6562 | 0.5220 | 0.5938 | 0.5609 |
| 1.0856 | 8.0 | 32 | 1.1434 | 0.5938 | 0.5143 | 0.5938 | 0.5603 | 0.5938 | 0.5938 | 0.6562 | 0.5220 | 0.5938 | 0.5609 |
| 1.0601 | 9.0 | 36 | 1.1417 | 0.6562 | 0.6029 | 0.6562 | 0.6389 | 0.6562 | 0.6562 | 0.7125 | 0.8440 | 0.6562 | 0.8217 |
| 1.0375 | 10.0 | 40 | 1.1227 | 0.6875 | 0.6582 | 0.6875 | 0.6831 | 0.6875 | 0.6875 | 0.7330 | 0.8457 | 0.6875 | 0.8237 |
| 1.0168 | 11.0 | 44 | 1.1065 | 0.7812 | 0.7692 | 0.7812 | 0.7845 | 0.7812 | 0.7812 | 0.8187 | 0.8717 | 0.7812 | 0.8534 |
| 1.0093 | 12.0 | 48 | 1.0887 | 0.7812 | 0.7692 | 0.7812 | 0.7845 | 0.7812 | 0.7812 | 0.8187 | 0.8717 | 0.7812 | 0.8534 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
radned/speecht5_voxpopuli_nl
|
radned
| 2023-07-25T00:28:06Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-24T21:51:25Z |
---
base_model: ''
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_voxpopuli_nl
This model is a fine-tuned version of [](https://huggingface.co/) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2804 | 4.3 | 1000 | 1.1664 |
| 1.054 | 8.61 | 2000 | 0.9818 |
| 1.0183 | 12.91 | 3000 | 0.9600 |
| 1.0028 | 17.21 | 4000 | 0.9541 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
DunnBC22/vit-base-patch16-224-in21k_lung_and_colon_cancer
|
DunnBC22
| 2023-07-25T00:27:30Z | 1,841 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-06T22:39:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k_lung_and_colon_cancer
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9994
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k_lung_and_colon_cancer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Accuracy: 0.9994
- F1
  - Weighted: 0.9994
  - Micro: 0.9994
  - Macro: 0.9994
- Recall
  - Weighted: 0.9994
  - Micro: 0.9994
  - Macro: 0.9994
- Precision
  - Weighted: 0.9994
  - Micro: 0.9994
  - Macro: 0.9994
## Model description
This is a multiclass image classification model of lung and colon cancers.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Lung%20%26%20Colon%20Cancer/Lung_and_colon_cancer_ViT.ipynb
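A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="DunnBC22/vit-base-patch16-224-in21k_lung_and_colon_cancer",
)
print(classifier("path/to/histopathology_image.png"))
```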
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/andrewmvd/lung-and-colon-cancer-histopathological-images
_Sample Images From Dataset:_

## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.0574 | 1.0 | 1250 | 0.0410 | 0.9864 | 0.9864 | 0.9864 | 0.9865 | 0.9864 | 0.9864 | 0.9864 | 0.9872 | 0.9864 | 0.9875 |
| 0.0031 | 2.0 | 2500 | 0.0105 | 0.9972 | 0.9972 | 0.9972 | 0.9972 | 0.9972 | 0.9972 | 0.9973 | 0.9972 | 0.9972 | 0.9972 |
| 0.0007 | 3.0 | 3750 | 0.0016 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1
|
Alfric/llama2-qlora-finetunined-french
|
Alfric
| 2023-07-25T00:18:27Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-25T00:18:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
DunnBC22/codebert-base-Password_Strength_Classifier
|
DunnBC22
| 2023-07-25T00:14:26Z | 98 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-06T04:17:51Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: codebert-base-Password_Strength_Classifier
results: []
---
# codebert-base-Password_Strength_Classifier
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base).
It achieves the following results on the evaluation set:
- Loss: 0.0077
- Accuracy: 0.9975
- F1
  - Weighted: 0.9975
  - Micro: 0.9975
  - Macro: 0.9963
- Recall
  - Weighted: 0.9975
  - Micro: 0.9975
  - Macro: 0.9978
- Precision
  - Weighted: 0.9975
  - Macro: 0.9948
  - Micro: 0.9975
## Model description
The model classifies passwords as one of the following:
1) Weak
2) Medium
3) Strong
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Password%20Strength%20Classification%20(MC)/CodeBERT-Base%20-%20Password_Classifier.ipynb
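A minimal inference sketch (the example password is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DunnBC22/codebert-base-Password_Strength_Classifier",
)
print(classifier("correct-horse-battery-staple"))
```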
## Intended uses & limitations
This is intended to show the possibilities. It is mainly limited by the input data.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/bhavikbb/password-strength-classifier-dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.0438 | 1.0 | 8371 | 0.0112 | 0.9956 | 0.9956 | 0.9956 | 0.9935 | 0.9956 | 0.9956 | 0.9963 | 0.9957 | 0.9956 | 0.9908 |
| 0.0133 | 2.0 | 16742 | 0.0092 | 0.9966 | 0.9967 | 0.9966 | 0.9951 | 0.9966 | 0.9966 | 0.9966 | 0.9967 | 0.9966 | 0.9935 |
| 0.0067 | 3.0 | 25113 | 0.0077 | 0.9975 | 0.9975 | 0.9975 | 0.9963 | 0.9975 | 0.9975 | 0.9978 | 0.9975 | 0.9975 | 0.9948 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
fernandals/mt5-small-finetuned-xlsum-en-pt
|
fernandals
| 2023-07-25T00:11:11Z | 97 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"text-generation-inference",
"summarization",
"pt",
"en",
"dataset:csebuetnlp/xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-07-24T23:39:55Z |
---
datasets:
- csebuetnlp/xlsum
language:
- pt
- en
metrics:
- rouge
library_name: transformers
pipeline_tag: summarization
tags:
- text-generation-inference
---
To summarize a text, prepend a language id to your input:
- for texts in English, add 'EN '
- for texts in Portuguese, add 'PT '
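A minimal usage sketch (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="fernandals/mt5-small-finetuned-xlsum-en-pt",
)
text = "EN " + "The city council approved a plan to expand the bike lane network over the next two years..."
print(summarizer(text, max_length=64)[0]["summary_text"])
```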
|
DCLXVIy/loraaa
|
DCLXVIy
| 2023-07-25T00:03:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-25T00:00:00Z |
---
license: creativeml-openrail-m
---
|
RushTurtle/crnn_vgg16_bn_20230724-201817
|
RushTurtle
| 2023-07-24T23:54:23Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-24T23:54:18Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
"arch": "crnn_vgg16_bn",
"train_path": "/tmp/dataset/train3_2800/",
"val_path": "/tmp/dataset/val3_2800/",
"train_samples": 1000,
"val_samples": 20,
"font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
"min_chars": 1,
"max_chars": 12,
"name": null,
"epochs": 1000,
"batch_size": 32,
"device": 0,
"input_size": 32,
"lr": 0.001,
"weight_decay": 0,
"workers": 16,
"resume": null,
"vocab": "french",
"test_only": false,
"show_samples": false,
"wb": true,
"push_to_hub": true,
"pretrained": false,
"sched": "cosine",
"amp": true,
"find_lr": false
}
```
|
VFiona/opus-mt-en-it-finetuned_20000-en-to-it
|
VFiona
| 2023-07-24T23:48:09Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-24T22:21:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-it-finetuned_20000-en-to-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-it-finetuned_20000-en-to-it
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2520
- Bleu: 74.7902
- Gen Len: 28.3805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3022 | 1.0 | 1125 | 0.2520 | 74.7902 | 28.3805 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.11.0
|
minwook/CreateKoreanNovel
|
minwook
| 2023-07-24T23:45:53Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T14:32:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
jstawski/Llama-2-13b-hf-finetuned-SNG
|
jstawski
| 2023-07-24T23:32:04Z | 0 | 1 |
peft
|
[
"peft",
"conversational",
"en",
"license:llama2",
"region:us"
] |
text-generation
| 2023-07-24T03:25:41Z |
---
license: llama2
library_name: peft
language:
- en
pipeline_tag: conversational
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
FFusion/FFusionXL-LoRa-SDXL-Island-Generator
|
FFusion
| 2023-07-24T23:28:40Z | 100 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"en",
"base_model:diffusers/stable-diffusion-xl-base-0.9",
"base_model:adapter:diffusers/stable-diffusion-xl-base-0.9",
"doi:10.57967/hf/0919",
"license:other",
"region:us"
] |
text-to-image
| 2023-07-23T15:20:06Z |
---
license: other
base_model: diffusers/stable-diffusion-xl-base-0.9
instance_prompt: a 3d island
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
library_name: diffusers
badges:
- alt: Name
url: >-
https://img.shields.io/badge/Name-FFusion%20XL%20LoRA%20%F0%9F%8F%9D%EF%B8%8F%20%20Island%20Generator-89CFF0
src: >-
https://img.shields.io/badge/Name-FFusion%20XL%20LoRA%20%F0%9F%8F%9D%EF%B8%8F%20%20Island%20Generator-89CFF0
- alt: LoRA Type
url: https://img.shields.io/badge/LoRA%20Type-LyCORIS%2FLoKr%2C%20Prodigy-blue
src: https://img.shields.io/badge/LoRA%20Type-LyCORIS%2FLoKr%2C%20Prodigy-blue
- alt: Refiner Compatible
url: https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success
src: https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success
- alt: CLIP Tested
url: >-
https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success
src: >-
https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success
- alt: Trained Resolution
url: >-
https://img.shields.io/badge/Trained%20Resolution-1024%20x%201024%20pixels-yellow
src: >-
https://img.shields.io/badge/Trained%20Resolution-1024%20x%201024%20pixels-yellow
- alt: Training Data
url: https://img.shields.io/badge/Training%20Data-3%20x%203000%20images-orange
src: https://img.shields.io/badge/Training%20Data-3%20x%203000%20images-orange
- alt: Tested Resolution
url: >-
https://img.shields.io/badge/Tested%20Resolution-Up%20to%201600%20pixels-brightgreen
src: >-
https://img.shields.io/badge/Tested%20Resolution-Up%20to%201600%20pixels-brightgreen
- alt: Tested on
url: >-
https://img.shields.io/badge/Tested%20on-SDXL%200.9%20%26%20FFXL%200.001-blue
src: >-
https://img.shields.io/badge/Tested%20on-SDXL%200.9%20%26%20FFXL%200.001-blue
- alt: Hugging Face Model
url: https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue
src: https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue
- alt: GitHub
url: https://img.shields.io/badge/GitHub-1e--2-green
src: https://img.shields.io/badge/GitHub-1e--2-green
- alt: Facebook
url: https://img.shields.io/badge/Facebook-FFusionAI-blue
src: https://img.shields.io/badge/Facebook-FFusionAI-blue
- alt: Civitai
url: https://img.shields.io/badge/Civitai-FFusionAI-blue
src: https://img.shields.io/badge/Civitai-FFusionAI-blue
language:
- en
---
# FFusion XL LoRA 🏝️Island Generator
<div style="display: flex; flex-wrap: wrap; gap: 2px;">
<img src="https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success">
<img src="https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success">
<img src="https://img.shields.io/badge/LoRA%20Type-LyCORIS%2FLoKr%2C%20Prodigy-blue">
<img src="https://img.shields.io/badge/Tested%20on-SDXL%200.9%20%26%20FFXL%200.001-blue">
</div>
The FFusion XL LoRA Island Generator is a model designed to generate game assets like islands and objects in low polygonal landscapes, pixelated, 3D, and isometric styles, making it ideal for retro-inspired or stylized game environments.
## Specifications
- **Model Name**: FFusion XL LoRA Island Generator
- **LoRA Type**: LyCORIS/LoKr, Prodigy
- **Trained Resolution**: 1024 x 1024 pixels
- **Tested Resolution**: Up to 1600 pixels
- **Training Data**: The model was trained on 9,000 images across three separate training resumes of 3,000 images each, providing a diverse dataset for learning.
<div style="display: flex; flex-wrap: wrap; gap: 4px;"><img src="https://img.shields.io/badge/Trained%20Resolution-1024%20x%201024%20pixels-yellow">
<img src="https://img.shields.io/badge/Training%20Data-3%20x%203000%20images-orange">
<img src="https://img.shields.io/badge/Tested%20Resolution-Up%20to%201600%20pixels-brightgreen"></div>

## Refiner Example


## Usage Recommendations
<img src="https://img.shields.io/badge/Name-FFusion%20XL%20LoRA%20%F0%9F%8F%9D%EF%B8%8F%20%20Island%20Generator-89CFF0">
The FFusion XL LoRA Island Generator can be utilized to quickly create game assets for a variety of game projects. It is best suited for applications where a retro or pixelated style is desired, and where low polygonal landscapes and 3D elements are prominent. Designers and developers can leverage the model to streamline the asset creation process, saving valuable time and resources.
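A minimal diffusers sketch (assuming the repo ships diffusers-loadable LoRA weights; LyCORIS/LoKr checkpoints may instead require a LyCORIS-aware loader, and the prompt simply extends the card's instance prompt "a 3d island"):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("FFusion/FFusionXL-LoRa-SDXL-Island-Generator")
image = pipe("a 3d island, low poly isometric game asset", num_inference_steps=30).images[0]
image.save("island.png")
```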
## Limitations
- The model's performance may vary when generating assets beyond the tested resolution of 1600 pixels. Using it for higher resolutions may lead to a decrease in output quality or increased generation time.
- The model's ability to produce highly detailed or realistic assets might be limited due to its pixelated and low polygonal nature.
- It is recommended to thoroughly evaluate the generated assets to ensure they align with the desired artistic direction and quality standards of the specific game project.
## Ethical Considerations
As with any AI model, ethical considerations should be taken into account during usage. It is essential to avoid generating assets that might infringe upon copyrighted material or violate intellectual property rights. Additionally, the model's outputs should be monitored to ensure it does not produce inappropriate, offensive, or harmful content.
## Citations
If you use the FFusion XL LoRA Island Generator in your project or research, please provide appropriate citations to acknowledge the model's contribution.
## Disclaimer
The FFusion XL LoRA Island Generator is a powerful tool for generating game assets; however, it may not be perfect and might have limitations. Users are encouraged to test and validate the generated assets thoroughly before integrating them into their game projects. The developers of this model hold no responsibility for any consequences that may arise from its usage.
<div style="display: flex; flex-wrap: wrap; gap: 2px; align-items: center;">
<p>These are LoRA adaption weights for</p>
<a href="https://huggingface.co/stable-diffusion-xl-base-0.9" target="_new" rel="ugc"><img src="https://img.shields.io/badge/stable--diffusion--xl--base--0.9-Model-blue" alt="stable-diffusion-xl-base-0.9"></a>
<p>&</p>
<a href="https://huggingface.co/FFusionXL-09-SDXL" target="_new" rel="ugc"><img src="https://img.shields.io/badge/FFusionXL--09--SDXL-Model-blue" alt="FFusionXL-09-SDXL"></a>
<p>The weights were trained using experimental</p>
<a href="https://github.com/kohya-ss/sd-scripts" target="_new" rel="ugc"><img src="https://img.shields.io/badge/kohya--ss%2Fsd--scripts%20build-Model-blue" alt="kohya-ss/sd-scripts build"></a>
<p>build</p>
</div>







<div style="display: flex; flex-wrap: wrap; gap: 2px;">
<a href="https://huggingface.co/FFusion/FFusion-BaSE" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Hugging%20Face-FFusion--BaSE-blue" alt="Hugging Face Model"></a>
<a href="https://github.com/1e-2" target="_new" rel="ugc"><img src="https://img.shields.io/badge/GitHub-1e--2-green" alt="GitHub"></a>
<a href="https://www.facebook.com/FFusionAI/" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Facebook-FFusionAI-blue" alt="Facebook"></a>
<a href="https://civitai.com/models/82039/ffusion-ai-sd-21" target="_new" rel="ugc"><img src="https://img.shields.io/badge/Civitai-FFusionAI-blue" alt="Civitai"></a>
</div>
<div style="display: flex; flex-wrap: wrap; gap: 10px; align-items: center;">
<p>These are LoRA adaption weights for</p>
<a href="https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9" target="_new" rel="ugc"><img src="https://img.shields.io/badge/stable--diffusion--xl--base--0.9-Model-purple" alt="stable-diffusion-xl-base-0.9"></a>
<p>&</p>
<a href="https://huggingface.co/FFusion/FFusionXL-09-SDXL" target="_new" rel="ugc"><img src="https://img.shields.io/badge/FFusionXL--09--SDXL-Model-pink" alt="FFusionXL-09-SDXL"></a>
<p>The weights were trained using experimental</p>
<a href="https://github.com/kohya-ss/sd-scripts" target="_new" rel="ugc"><img src="https://img.shields.io/badge/kohya--ss-sd--scripts-blue" alt="kohya-ss/sd-scripts build"></a>
<p>build</p>
</div>
**Attribution:**
"SDXL 0.9 is licensed under the SDXL Research License, Copyright (c) Stability AI Ltd. All Rights Reserved."
## License
[SDXL 0.9 Research License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9/blob/main/LICENSE.md)
|
s4saif/llma-finetuned-7b
|
s4saif
| 2023-07-24T23:25:12Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:daryl149/llama-2-7b-chat-hf",
"base_model:finetune:daryl149/llama-2-7b-chat-hf",
"region:us"
] | null | 2023-07-24T21:40:34Z |
---
base_model: daryl149/llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llma-finetuned-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llma-finetuned-7b
This model is a fine-tuned version of [daryl149/llama-2-7b-chat-hf](https://huggingface.co/daryl149/llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
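A hedged reconstruction of these hyperparameters as Hugging Face `TrainingArguments`; the output directory is an assumption, and the Adam betas/epsilon listed above are the Trainer defaults, so they are not repeated here.
```python
from transformers import TrainingArguments
# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="llma-finetuned-7b",  # assumed name, not stated in the card
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```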
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
FFusion/FFusion-BaSE
|
FFusion
| 2023-07-24T23:12:07Z | 258 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"di.ffusion.ai",
"art",
"base model",
"en",
"doi:10.57967/hf/0926",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-01T08:00:08Z |
---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- stable-diffusion
- text-to-image
- di.ffusion.ai
- art
- base model
- diffusers
inference: true
library_name: diffusers
widget:
- text: >-
a sprinkled donut sitting on top of a table, blender donut tutorial,
colorful hyperrealism, everything is made of candy, hyperrealistic digital
painting, covered in sprinkles and crumbs, vibrant colors hyper realism,
colorful smoke explosion background
example_title: Donut Fusion
- text: >-
a cup of coffee with a tree in it, surreal art, awesome great composition,
surrealism!!!!, cafe in the clouds, perfectly realistic yet surreal, surreal
realistic, floating trees, amazing composition, dream scenery art, whimsical
surrealism, surreal composition, trending artistic art, surrealism art,
surreal scene, surrealistic painting, surreal style, surreal illustration,
dreamlike surrealism colorful smoke and fire coming out of it,explosion of
data fragments,exploding background,realistic explosion,3d digital art
4k,fire and explosion,explosion,background explosion,cinema 4 d
art,shattering,beeple. hyperrealism,explosion background,rendered in cinema
4 d,rendered in cinema4d,explosive background,
example_title: Coffee Fusion
- text: >-
brightly colored headphones with a splash of paint and music notes, vibing
to music, artistic illustration, stunning artwork, music is life, beautiful
digital artwork, headphones on, listening to music, music poster,
synesthesia, music in the air, listening to godly music, style hybrid mix of
beeple, headphones, digital artwork 4 k, side profile artwork, no humans,
planet, space, black background, cable, simple background, concept art,
cinematic, dramatic, intricate details, dark lighting
example_title: Headset Fusion
- text: >-
a group of three blocks with a picture of a boat in the middle of them,
surreal 3 d render, 3 d epic illustrations, 3 d artistic render, inspired by
Matthias Jung, environmental key art, erik johansson style, surreal concept
art, alexander jansson style, cube portals, beeple masterpiece, 3 d render
beeple, surrealistic digital artwork
example_title: Digital Fusion
---
# FFUSION AI - 768 BaSE Public alpha Release

## Model Overview: Unleashing the Power of Imagination!
<div style="display: flex; flex-wrap: wrap; gap: 2px;">
<a href="https://huggingface.co/FFusion/"><img src="https://img.shields.io/badge/🧠%20Model%20Type-Diffusion--based%20text--to--image%20generation%20model-blueviolet" alt="Model Type"></a>
<a href="https://huggingface.co/FFusion/"><img src="https://img.shields.io/badge/🔏%20License-CreativeML%20Open%20RAIL++--M%20License-orange" alt="License"></a>
<a href="https://huggingface.co/FFusion/"><img src="https://img.shields.io/badge/🖥️%20Hardware%20Type-A100%20PCIe%2040GB-green" alt="Hardware Type"></a>
<a href="https://huggingface.co/FFusion/"><img src="https://img.shields.io/badge/⏰%20Hours%20Used-1190-red" alt="Hours Used"></a>
<a href="https://huggingface.co/FFusion/"><img src="https://img.shields.io/badge/🌩️%20Cloud%20Provider-CoreWeave%20%26%20Runpod-blue" alt="Cloud Provider"></a>
<a href="https://huggingface.co/FFusion/"><img src="https://img.shields.io/badge/🍃%20Carbon%20Emitted-124.95%20kg%20of%20CO2-brightgreen" alt="Carbon Emitted"></a>
</div>
FFUSION AI is a state-of-the-art image generation and transformation tool, developed around the leading Latent Diffusion Model. Leveraging Stable Diffusion 2.1, FFUSION AI converts your prompts into captivating artworks. Discover an imaginative landscape where ideas come to life in vibrant, surreal visuals.
- **Developed by:** Idle Stoev, Source Code Bulgaria, Praesidium CX & BlackSwan Technologies
- **Shared by:** FFusion AI
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s) (NLP):** English
- **License:** CreativeML Open RAIL++-M License
## Model Use: Enabling Creativity and Exploring AI Frontiers

Designed for research and artistic exploration, FFUSION AI serves as a versatile tool in a variety of scenarios:
- Investigating biases and constraints in generative models
- Unleashing creativity in artistic endeavors
- Infusing AI-driven innovation into educational or creative tools
- Furthering research in the exciting field of generative models
- **Repository:** https://github.com/1e-2
- **Demo:** https://huggingface.co/spaces/FFusion/FFusionAI-Streamlit-Playground

**Out-of-Scope Use and Prohibited Misuse:**
- Generating factually inaccurate representations of people or events
- Inflicting harm or spreading malicious content such as demeaning, dehumanizing, or offensive imagery
- Creating harmful stereotypes or spreading discrimination
- Impersonating individuals without their consent
- Disseminating non-consensual explicit content or misinformation
- Violating copyrights or usage terms of licensed material
## Model Limitations and Bias
While our model brings us closer to the future of AI-driven creativity, there are several limitations:
- Achieving perfect photorealism or surrealism is still an ongoing challenge.
- Rendering legible text can be difficult without roughly 30 minutes of additional fine-tuning on your brand assets.
- Accurate generation of human faces, especially far away faces, is not guaranteed (yet).
## Model Releases
We are thrilled to announce:
- **Version 512 Beta:** Featuring LiTE and MiD BFG model variations
- **Version 768 Alpha:** BaSE, FUSION, and FFUSION models with enhanced training capabilities, including LoRA, LyCORIS, DyLoRA & kohya-ss/sd-scripts.
- **Version 768 BaSE:** A base-ready model that makes it easy to apply the more than 200 LoRA models built and trained along the way.
## Environmental Impact
In line with our commitment to sustainability, FFUSION AI has been designed with carbon efficiency in mind:
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 1190
- **Cloud Provider:** CoreWeave & Runpod (official partner)
- **Compute Region:** US Cyxtera Chicago Data Center - ORD1 / EU - CZ & EU - RO
- **Carbon Emitted:** 124.95 kg of CO2 (calculated via Machine Learning Impact calculator)
That said, all LoRA and subsequent models are based on this initial training.
## Model Card Authors
This model card was authored by Idle Stoev and is based on the Stability AI - Stable Diffusion 2.1 model card.
## Model Card Contact
[](https://huggingface.co/FFusion/FFusion-BaSE)
[](https://huggingface.co/FFusion/di.FFUSION.ai-v2.1-768-BaSE-alpha)
[](https://huggingface.co/FFusion/di.ffusion.ai.Beta512)
[](https://huggingface.co/FFusion/FFUSION.ai-Text-Encoder-LyCORIS-SD-2.1)
Contact:
[](mailto:[email protected])
_Download the [FFUSION AI - 768 BaSE Release here](https://huggingface.co/FFusion/FFusion-BaSE/blob/main/FFusion-BaSE.safetensors)._
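A minimal text-to-image sketch with diffusers, based on the `StableDiffusionPipeline` layout advertised in the card metadata; the prompt reuses one of the widget examples above.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("FFusion/FFusion-BaSE", torch_dtype=torch.float16).to("cuda")
prompt = "a sprinkled donut sitting on top of a table, colorful hyperrealism, covered in sprinkles and crumbs"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("donut-fusion.png")
```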
|
taytun/llama2-qlora-qr-en
|
taytun
| 2023-07-24T23:05:29Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T23:05:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent loading sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
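A minimal sketch of recreating this quantization config and attaching the adapter at inference time; the base model id below is an assumption, since the card does not state which Llama-2 checkpoint the adapter was trained against.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
# Mirrors the config listed above (4-bit NF4, float16 compute, no double quantization).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_id = "meta-llama/Llama-2-7b-hf"  # assumption -- replace with the actual base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "taytun/llama2-qlora-qr-en")
```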
### Framework versions
- PEFT 0.5.0.dev0
|
cgallegoan/disasterBERT
|
cgallegoan
| 2023-07-24T22:36:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-24T22:01:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: disasterBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disasterBERT
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4232
- Accuracy: 0.8372
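Although the card leaves usage details open, the model is tagged for text classification, so a minimal inference sketch with the `transformers` pipeline should apply; the label names are whatever the checkpoint's config defines.
```python
from transformers import pipeline
# Loads the fine-tuned DistilBERT checkpoint and classifies a tweet-style input.
classifier = pipeline("text-classification", model="cgallegoan/disasterBERT")
print(classifier("Forest fire near La Ronge Sask. Canada"))
```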
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 86 | 0.4058 | 0.8299 |
| No log | 2.0 | 172 | 0.3936 | 0.8343 |
| No log | 3.0 | 258 | 0.3999 | 0.8408 |
| No log | 4.0 | 344 | 0.4123 | 0.8379 |
| No log | 5.0 | 430 | 0.4232 | 0.8372 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
totally-not-an-llm/AlpacaCielo-13b
|
totally-not-an-llm
| 2023-07-24T22:31:51Z | 0 | 23 | null |
[
"license:llama2",
"region:us"
] | null | 2023-07-22T23:01:32Z |
---
license: llama2
---
# AlpacaCielo-13b
<figure>
<img src="https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b/resolve/main/alpaca.png" alt="cute cloud alpaca">
<figcaption style="font-size: 1em;"><i>"super cute baby alpaca laying on a cloud", Model: epicrealism_pureEvolutionV3</i></figcaption>
</figure>
AlpacaCielo-13b is a llama-2 based model designed for creative tasks, such as storytelling and roleplay, while still doing well with other chatbot purposes. It is a triple model merge of Nous-Hermes + Guanaco + Storywriter. While it is mostly *"uncensored"*, it still inherits some alignment from Guanaco.
[GPTQ quants](https://huggingface.co/TheBloke/AlpacaCielo-13B-GPTQ)<br>
[GGML quants](https://huggingface.co/TheBloke/AlpacaCielo-13B-GGML)<br>
(Courtesy of TheBloke)
**Prompt format is this (Guanaco QLORA in oobabooga):**
```
### Human: {prompt}
### Assistant:
```
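A minimal sketch of applying that prompt format with `transformers`, assuming the repository hosts standard Hugging Face-format weights; the quantized GPTQ/GGML builds linked above have their own loaders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "totally-not-an-llm/AlpacaCielo-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
# Wrap the user request in the Guanaco-style template shown above.
prompt = "### Human: Write a short story about a cloud alpaca.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```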
*Thanks to previous similar models such as Alpacino, Alpasta, and AlpacaDente for inspiring the creation of this model. Thanks also to the creators of the models involved in the merge. Original models:*
- [Nous-Hermes-Llama-2](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora)
- [Storywriter LoRA](https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA)
|
allenwang117/Reinforce-1
|
allenwang117
| 2023-07-24T22:17:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T22:17:44Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
digiplay/helloRealisticMan_v1.0beta
|
digiplay
| 2023-07-24T22:16:05Z | 307 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-24T22:01:31Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/115278/hellorealisticman
Original Author's DEMO image:

|
arpan-das-astrophysics/taxi-v3
|
arpan-das-astrophysics
| 2023-07-24T22:10:24Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T22:10:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="arpan-das-astrophysics/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
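The `load_from_hub` helper above is defined in the course notebook rather than shipped in a published package; a minimal equivalent, under that assumption, could look like this:
```python
import pickle
import gym
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dictionary from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
model = load_from_hub(repo_id="arpan-das-astrophysics/taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```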
|
kgwiazda/ppo-LunarLander-v2
|
kgwiazda
| 2023-07-24T21:59:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T21:59:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 224.12 +/- 27.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
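Until the author fills in the snippet above, a hedged loading sketch with `huggingface_sb3`; the checkpoint filename is an assumption based on the usual `ppo-LunarLander-v2.zip` naming convention.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Filename is assumed; check the repository's file listing if loading fails.
checkpoint = load_from_hub(repo_id="kgwiazda/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```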
|
mskhattori/wav2vec2phone-large-xlsr-jp-jdrtfw07-demo3
|
mskhattori
| 2023-07-24T21:46:58Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:jonatasgrosman/wav2vec2-large-xlsr-53-japanese",
"base_model:finetune:jonatasgrosman/wav2vec2-large-xlsr-53-japanese",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-24T17:25:00Z |
---
license: apache-2.0
base_model: jonatasgrosman/wav2vec2-large-xlsr-53-japanese
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2phone-large-xlsr-jp-jdrtfw07-demo3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2phone-large-xlsr-jp-jdrtfw07-demo3
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-japanese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0451
- Wer: 0.025
- Cer: 0.0195
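Given the low WER/CER above, a hedged inference sketch with the `transformers` ASR pipeline; the audio path is a placeholder, the model expects Japanese speech, and the phone-oriented training may make outputs phoneme-like rather than orthographic.
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="mskhattori/wav2vec2phone-large-xlsr-jp-jdrtfw07-demo3")
# "sample.wav" is a placeholder for a 16 kHz Japanese speech recording.
print(asr("sample.wav"))
```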
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1640
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.0002 | 1.0 | 328 | 2.8482 | 0.9965 | 0.9980 |
| 2.2134 | 2.0 | 656 | 2.0964 | 0.8864 | 0.9374 |
| 1.6625 | 3.0 | 984 | 1.2925 | 0.7960 | 0.7917 |
| 1.0846 | 4.0 | 1312 | 0.4955 | 0.3292 | 0.3089 |
| 0.8593 | 5.0 | 1640 | 0.2443 | 0.1061 | 0.0839 |
| 0.7502 | 6.0 | 1968 | 0.1552 | 0.0620 | 0.0504 |
| 0.6342 | 7.0 | 2296 | 0.1202 | 0.0491 | 0.0422 |
| 0.6012 | 8.0 | 2624 | 0.1008 | 0.0442 | 0.0379 |
| 0.6214 | 9.0 | 2952 | 0.0846 | 0.0390 | 0.0333 |
| 0.5486 | 10.0 | 3280 | 0.0756 | 0.0361 | 0.0309 |
| 0.5633 | 11.0 | 3608 | 0.0669 | 0.0310 | 0.0261 |
| 0.4977 | 12.0 | 3936 | 0.0609 | 0.0300 | 0.0251 |
| 0.4863 | 13.0 | 4264 | 0.0556 | 0.0282 | 0.0233 |
| 0.4416 | 14.0 | 4592 | 0.0533 | 0.0279 | 0.0234 |
| 0.4713 | 15.0 | 4920 | 0.0499 | 0.0266 | 0.0214 |
| 0.4715 | 16.0 | 5248 | 0.0489 | 0.0252 | 0.0197 |
| 0.4724 | 17.0 | 5576 | 0.0472 | 0.0259 | 0.0205 |
| 0.4835 | 18.0 | 5904 | 0.0463 | 0.0259 | 0.0204 |
| 0.4589 | 19.0 | 6232 | 0.0454 | 0.0252 | 0.0199 |
| 0.4207 | 20.0 | 6560 | 0.0451 | 0.025 | 0.0195 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/all-base-log-rarity
|
NasimB
| 2023-07-24T21:43:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T18:12:24Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-log-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-log-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4679
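A minimal generation sketch, assuming the checkpoint keeps the standard GPT-2 tokenizer and language-modeling head:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="NasimB/all-base-log-rarity")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```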
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.488 | 0.31 | 500 | 5.6024 |
| 5.1781 | 0.62 | 1000 | 5.3432 |
| 4.832 | 0.94 | 1500 | 5.1744 |
| 4.5773 | 1.25 | 2000 | 5.1710 |
| 4.4585 | 1.56 | 2500 | 5.1043 |
| 4.3631 | 1.87 | 3000 | 5.0802 |
| 4.1959 | 2.19 | 3500 | 5.0614 |
| 4.1063 | 2.5 | 4000 | 5.1289 |
| 4.0747 | 2.81 | 4500 | 5.0496 |
| 3.9482 | 3.12 | 5000 | 5.1552 |
| 3.8193 | 3.44 | 5500 | 5.1364 |
| 3.804 | 3.75 | 6000 | 5.1111 |
| 3.7357 | 4.06 | 6500 | 5.2502 |
| 3.5478 | 4.37 | 7000 | 5.2565 |
| 3.5409 | 4.68 | 7500 | 5.3273 |
| 3.5276 | 5.0 | 8000 | 5.3765 |
| 3.3651 | 5.31 | 8500 | 5.4361 |
| 3.359 | 5.62 | 9000 | 5.4664 |
| 3.3559 | 5.93 | 9500 | 5.4678 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
benjamin/compoundpiece-stage1
|
benjamin
| 2023-07-24T21:32:13Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"af",
"az",
"be",
"bg",
"bn",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gl",
"gu",
"he",
"hi",
"hu",
"hy",
"id",
"is",
"it",
"ka",
"kk",
"ky",
"la",
"lt",
"lv",
"mg",
"mk",
"ml",
"mt",
"nl",
"pa",
"pl",
"pt",
"ro",
"ru",
"sk",
"sq",
"sv",
"ta",
"te",
"th",
"tr",
"uk",
"yi",
"yo",
"dataset:benjamin/compoundpiece",
"arxiv:2305.14214",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-13T12:02:23Z |
---
license: mit
language:
- af
- az
- be
- bg
- bn
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hu
- hy
- id
- is
- it
- ka
- kk
- ky
- la
- lt
- lv
- mg
- mk
- ml
- mt
- nl
- pa
- pl
- pt
- ro
- ru
- sk
- sq
- sv
- ta
- te
- th
- tr
- uk
- yi
- yo
datasets:
- benjamin/compoundpiece
---
CompoundPiece model trained only on Stage 1 training data (self-supervised training on hyphenated and non-hyphenated words scraped from the web). See [CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models](https://arxiv.org/abs/2305.14214).
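A hedged usage sketch: the Stage-1 objective is to reproduce words with hyphens at their internal boundaries, but the exact input formatting (for example, any task or language prefix) should be checked against the paper and its released code before relying on the output.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_id = "benjamin/compoundpiece-stage1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)
# Feed a compound word and decode the hyphenated segmentation the model proposes.
inputs = tokenizer("Bundesfinanzministerium", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```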
# Citation
```
@article{minixhofer2023compoundpiece,
title={CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models},
author={Minixhofer, Benjamin and Pfeiffer, Jonas and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2305.14214},
year={2023}
}
```
# License
MIT
|
snicolau/a2c-PandaReachDense-v2
|
snicolau
| 2023-07-24T21:30:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T21:27:42Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.93 +/- 0.43
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
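Until the author fills in the snippet above, a hedged loading sketch; the checkpoint filename is an assumption from the usual naming convention, and `panda_gym` must be installed so the environment id is registered.
```python
import gym
import panda_gym  # noqa: F401  -- registers the PandaReachDense-v2 environment id
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Filename is assumed; check the repository's file listing if loading fails.
checkpoint = load_from_hub(repo_id="snicolau/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
env = gym.make("PandaReachDense-v2")
obs = env.reset()  # on newer gym/panda-gym versions reset returns (obs, info) instead
action, _states = model.predict(obs, deterministic=True)
```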
|
rovargasc/setfit-model_clasificadorEstudiantesV2
|
rovargasc
| 2023-07-24T21:16:53Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-24T21:16:27Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rovargasc/setfit-model_clasificadorEstudiantesV2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a hedged sketch of this two-step procedure follows the list).
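A minimal sketch of that two-step procedure with the `SetFitTrainer` API; the tiny in-memory dataset and the base Sentence Transformer shown here are stand-ins, since the actual training data is not published and the card's tags suggest a RoBERTa-based encoder was used.
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer
# Tiny stand-in dataset; the real training data for this checkpoint is not published.
train_ds = Dataset.from_dict({
    "text": ["great explanation, thanks!", "this lecture was confusing",
             "loved the examples", "too fast, hard to follow"],
    "label": [1, 0, 1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")  # assumed base
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embeddings
    num_iterations=20,                # contrastive pairs generated per example
    num_epochs=1,                     # step 2: the classification head is fitted afterwards
)
trainer.train()
```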
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rovargasc/setfit-model_clasificadorEstudiantesV2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
anth0nyhak1m/CFGFP_ProductGroupCalssifier_v1
|
anth0nyhak1m
| 2023-07-24T21:08:37Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T16:48:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CFGFP_ProductGroupCalssifier_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CFGFP_ProductGroupCalssifier_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1960
- Accuracy: 0.9644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2342 | 1.0 | 3804 | 0.1990 | 0.9464 |
| 0.1457 | 2.0 | 7608 | 0.1844 | 0.9567 |
| 0.1083 | 3.0 | 11412 | 0.1864 | 0.9602 |
| 0.0675 | 4.0 | 15216 | 0.1943 | 0.9641 |
| 0.0464 | 5.0 | 19020 | 0.1960 | 0.9644 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.0
- Tokenizers 0.13.3
|
HaroldB/LLama-2-7B
|
HaroldB
| 2023-07-24T21:07:07Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-07-24T20:43:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
eccadena/test_model
|
eccadena
| 2023-07-24T20:57:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-24T20:54:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (recreated in the short sketch below the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
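For completeness, this card describes an 8-bit load (the 4-bit fields above are just defaults); recreating it only needs the `load_in_8bit` flag, with the base model left as a placeholder since the card does not name it.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # matches the config listed above
base = AutoModelForCausalLM.from_pretrained("BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto")  # placeholder id
model = PeftModel.from_pretrained(base, "eccadena/test_model")
```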
### Framework versions
- PEFT 0.5.0.dev0
|
Gelmo/Halouf
|
Gelmo
| 2023-07-24T20:54:19Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"conversational",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:Open-Orca/OpenOrca",
"dataset:ehartford/dolphin",
"dataset:OpenAssistant/oasst1",
"region:us"
] |
text-generation
| 2023-07-24T20:52:28Z |
---
datasets:
- fka/awesome-chatgpt-prompts
- Open-Orca/OpenOrca
- ehartford/dolphin
- OpenAssistant/oasst1
language:
- en
metrics:
- code_eval
- accuracy
library_name: adapter-transformers
pipeline_tag: conversational
---
|
snicolau/a2c-AntBulletEnv-v0
|
snicolau
| 2023-07-24T20:37:28Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-24T20:36:17Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1662.96 +/- 314.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
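The same `huggingface_sb3` pattern applies here, with the extra step that `pybullet_envs` must be imported so the `AntBulletEnv-v0` id is registered; the checkpoint filename is again an assumption.
```python
import gym
import pybullet_envs  # noqa: F401  -- registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
checkpoint = load_from_hub(repo_id="snicolau/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
```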
|
NasimB/all-base-rarity
|
NasimB
| 2023-07-24T20:16:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-24T16:51:11Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4936 | 0.31 | 500 | 5.4520 |
| 5.1959 | 0.62 | 1000 | 5.1240 |
| 4.8668 | 0.94 | 1500 | 4.9353 |
| 4.6074 | 1.25 | 2000 | 4.8948 |
| 4.4893 | 1.56 | 2500 | 4.8297 |
| 4.3974 | 1.87 | 3000 | 4.7761 |
| 4.2253 | 2.19 | 3500 | 4.7494 |
| 4.1376 | 2.5 | 4000 | 4.7360 |
| 4.1096 | 2.81 | 4500 | 4.7311 |
| 3.9762 | 3.12 | 5000 | 4.7291 |
| 3.8468 | 3.44 | 5500 | 4.7377 |
| 3.8328 | 3.75 | 6000 | 4.7239 |
| 3.7659 | 4.06 | 6500 | 4.7433 |
| 3.5741 | 4.37 | 7000 | 4.7670 |
| 3.5658 | 4.68 | 7500 | 4.7583 |
| 3.5516 | 5.0 | 8000 | 4.7554 |
| 3.385 | 5.31 | 8500 | 4.7837 |
| 3.3829 | 5.62 | 9000 | 4.7885 |
| 3.3787 | 5.93 | 9500 | 4.7913 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|