modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-01 06:28:43) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (546 classes) | tags (list, length 1–4.05k) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-01 06:27:36) | card (string, length 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
AtAndDev/ShortKing-1.4b-v0.1
|
AtAndDev
| 2023-09-29T20:30:08Z | 2,430 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:vicgalle/alpaca-gpt4",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-25T20:26:25Z |
---
license: cc-by-nc-4.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
---
## Model Overview
Model license: cc-by-nc-4.0<br>
This model is based on [EleutherAI/pythia-1.4b-deduped](https://huggingface.co/EleutherAI/pythia-1.4b-deduped), LoRA-finetuned on the [vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4) dataset.<br>
## Prompt Template: `Alpaca`
```
<system_prompt>
### Instruction:
<user_message>
### Response:
<assistant_response>
```
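No usage example is included in this card; below is a minimal generation sketch that follows the Alpaca template above, assuming standard `transformers` causal-LM loading (the instruction text is only a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AtAndDev/ShortKing-1.4b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "You are a helpful assistant.\n"                                  # <system_prompt>
    "### Instruction:\nExplain LoRA fine-tuning in one sentence.\n"   # <user_message>
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```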
## Intended Use
**This is a test model and is not intended for real applications.** However, a new model on the same topic is on the way.<br>
This model series targets small but demanding applications.
## Training Details
This model took `2:31:23` to train with QLoRA on a single `T4` GPU (an illustrative configuration sketch follows the list below).<br>
- *epochs*: `1`
- *train batch size*: `12`
- *eval batch size*: `12`
- *gradient accumulation steps*: `1`
- *maximum gradient norm*: `0.3`
- *learning rate*: `2e-4`
- *weight decay*: `0.001`
- *optimizer*: `paged_adamw_32bit`
- *learning rate schedule*: `cosine`
- *warmup ratio (linear)*: `0.03`
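As referenced above, the hyperparameters can be expressed with `transformers` and `peft` roughly as follows. This is only an illustrative sketch: the LoRA rank, alpha, and dropout values are assumptions and are not documented in this card.
```python
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="shortking-qlora",      # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)
# LoRA settings below are assumptions for illustration only
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")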
|
Schadom/Reinforce-CartPole-v1
|
Schadom
| 2023-09-29T20:29:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-29T20:29:41Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 133.60 +/- 38.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
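The course's Reinforce implementation is custom, so there is no standard loader for this checkpoint. The sketch below only fetches the files from the Hub; the filenames follow the Unit 4 notebook convention and are assumptions, so check the repository's file list.
```python
from huggingface_hub import hf_hub_download

# Filenames are assumptions based on the Unit 4 notebook convention
checkpoint_path = hf_hub_download(repo_id="Schadom/Reinforce-CartPole-v1", filename="model.pt")
hparams_path = hf_hub_download(repo_id="Schadom/Reinforce-CartPole-v1", filename="hyperparameters.json")
# Deserializing the checkpoint additionally requires the Policy class defined in the
# Unit 4 notebook, so the torch.load step is not shown here.
print(checkpoint_path, hparams_path)
```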
|
Msughterx/wav2vec2-base-xlsr-igbo
|
Msughterx
| 2023-09-29T20:22:18Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-29T19:54:02Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-xlsr-igbo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-xlsr-igbo
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
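Since no usage example is provided, here is a minimal inference sketch (not part of the original card; it assumes the repository ships a processor/tokenizer alongside the CTC model, and the audio file below is hypothetical):
```python
from transformers import pipeline

# Automatic speech recognition pipeline loading this repo's model and processor
asr = pipeline("automatic-speech-recognition", model="Msughterx/wav2vec2-base-xlsr-igbo")
print(asr("sample_igbo_audio.wav"))  # path to a 16 kHz audio file (hypothetical)
```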
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
BatatinhaFeliz/distilbert-base-uncased-finetuned-cola
|
BatatinhaFeliz
| 2023-09-29T20:14:02Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-29T20:10:01Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5383825234212567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8280
- Matthews Correlation: 0.5384
## Model description
More information needed
## Intended uses & limitations
More information needed
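A minimal inference sketch (not part of the original card): CoLA is a binary acceptability task, so unless the repo defines `id2label`, the outputs are typically `LABEL_0` (unacceptable) and `LABEL_1` (acceptable).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="BatatinhaFeliz/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was read by the whole class."))
```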
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5264 | 1.0 | 535 | 0.4741 | 0.4662 |
| 0.3548 | 2.0 | 1070 | 0.5194 | 0.4871 |
| 0.232 | 3.0 | 1605 | 0.5937 | 0.5268 |
| 0.1786 | 4.0 | 2140 | 0.7739 | 0.5286 |
| 0.135 | 5.0 | 2675 | 0.8280 | 0.5384 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Mel-Iza0/semantic-search-test
|
Mel-Iza0
| 2023-09-29T20:12:32Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-09-29T18:47:37Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Mel-Iza0/semantic-search-test
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Mel-Iza0/semantic-search-test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Mel-Iza0/semantic-search-test')
model = AutoModel.from_pretrained('Mel-Iza0/semantic-search-test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Mel-Iza0/semantic-search-test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3181 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3181,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
raalst/RobBERT-v2-nl-ext-qa
|
raalst
| 2023-09-29T20:10:49Z | 115 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-25T20:43:10Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
A Dutch question-answering model, structurally identical to RobBERT-v2-nl-qa but trained on an augmented dataset:
sentences from the context that do not contain the answer span were moved from before the answer to after it, and vice versa.
The answer start positions were adjusted accordingly, and these modified records have an "m" appended to their ID field.
## Model Details
Results seem better than RobBERT-v2-nl-qa:

| Metric | Value |
|---|---|
| exact | 65.97542490405392 |
| f1 | 73.36792208890036 |
| total | 31007 |
| HasAns_exact | 62.55334441399757 |
| HasAns_f1 | 72.85023854321435 |
| HasAns_total | 22261 |
| NoAns_exact | 74.68557054653556 |
| NoAns_f1 | 74.68557054653556 |
| NoAns_total | 8746 |
| best_exact | 65.97542490405392 |
| best_exact_thresh | 0.0 |
| best_f1 | 73.3679220889002 |
| best_f1_thresh | 0.0 |
### Model Description
Example Dutch question and context for the hosted inference API:
Q: Op welke wijze heeft de termiet zich kunnen verspreiden? (How was the termite able to spread?)
CX: De koloniën zijn verspreid over twee woningen, bijgebouwen en tuinen in Zuid-Holland.
Een van de panden is een groot kassencomplex. Daaruit zijn meerdere planten verkocht,
waardoor het mogelijk is dat de termiet zich al verder heeft verspreid.
Eerdere pogingen om de koloniën uit te roeien zijn mislukt.
De plantenverkoop vanuit het koloniegebied is inmiddels tijdelijk stopgezet.
(English: The colonies are spread across two homes, outbuildings and gardens in South Holland. One of the buildings is a large greenhouse complex. Several plants have been sold from it, so the termite may already have spread further. Earlier attempts to eradicate the colonies have failed. Plant sales from the colony area have been temporarily suspended.)
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
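In the absence of an official snippet, here is a minimal sketch (not from the original card) using the `question-answering` pipeline with the Dutch example given elsewhere on this card:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="raalst/RobBERT-v2-nl-ext-qa")
result = qa(
    question="Op welke wijze heeft de termiet zich kunnen verspreiden?",
    context=(
        "De koloniën zijn verspreid over twee woningen, bijgebouwen en tuinen in Zuid-Holland. "
        "Een van de panden is een groot kassencomplex. Daaruit zijn meerdere planten verkocht, "
        "waardoor het mogelijk is dat de termiet zich al verder heeft verspreid."
    ),
)
print(result["answer"], result["score"])
```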
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bedus-creation/mBart-small-dataset-ii-eng-lim-003
|
bedus-creation
| 2023-09-29T20:07:50Z | 33 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:bedus-creation/mBart-small-dataset-ii-eng-lim-003",
"base_model:finetune:bedus-creation/mBart-small-dataset-ii-eng-lim-003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-28T20:18:17Z |
---
license: apache-2.0
base_model: bedus-creation/mBart-small-dataset-ii-eng-lim-003
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/mBart-small-dataset-ii-eng-lim-003
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/mBart-small-dataset-ii-eng-lim-003
This model is a fine-tuned version of [bedus-creation/mBart-small-dataset-ii-eng-lim-003](https://huggingface.co/bedus-creation/mBart-small-dataset-ii-eng-lim-003) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1015
- Validation Loss: 0.4146
- Epoch: 149
## Model description
More information needed
## Intended uses & limitations
More information needed
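No usage example is provided; the sketch below is one possible way to run inference. The repository ships TensorFlow weights, so `TFAutoModelForSeq2SeqLM` is used; the translation direction (the name suggests English to Limbu, "eng-lim") and the expected input format are assumptions.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "bedus-creation/mBart-small-dataset-ii-eng-lim-003"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Plain English input without a task prefix is an assumption; the card does not document it.
inputs = tokenizer("Good morning", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```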
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2093 | 0.2072 | 0 |
| 0.2068 | 0.2056 | 1 |
| 0.2062 | 0.2023 | 2 |
| 0.2045 | 0.2054 | 3 |
| 0.2027 | 0.2188 | 4 |
| 0.2019 | 0.2067 | 5 |
| 0.1997 | 0.2056 | 6 |
| 0.1991 | 0.2074 | 7 |
| 0.1978 | 0.2024 | 8 |
| 0.1962 | 0.2067 | 9 |
| 0.1955 | 0.2074 | 10 |
| 0.1945 | 0.2089 | 11 |
| 0.1928 | 0.2168 | 12 |
| 0.1907 | 0.2201 | 13 |
| 0.1900 | 0.2102 | 14 |
| 0.1888 | 0.2130 | 15 |
| 0.1882 | 0.2211 | 16 |
| 0.1870 | 0.2117 | 17 |
| 0.1857 | 0.2134 | 18 |
| 0.1838 | 0.2147 | 19 |
| 0.1824 | 0.2187 | 20 |
| 0.1812 | 0.2224 | 21 |
| 0.1813 | 0.2249 | 22 |
| 0.1798 | 0.2200 | 23 |
| 0.1787 | 0.2273 | 24 |
| 0.1772 | 0.2263 | 25 |
| 0.1780 | 0.2273 | 26 |
| 0.1764 | 0.2270 | 27 |
| 0.1754 | 0.2245 | 28 |
| 0.1738 | 0.2260 | 29 |
| 0.1730 | 0.2327 | 30 |
| 0.1720 | 0.2300 | 31 |
| 0.1702 | 0.2347 | 32 |
| 0.1698 | 0.2396 | 33 |
| 0.1689 | 0.2340 | 34 |
| 0.1693 | 0.2345 | 35 |
| 0.1661 | 0.2424 | 36 |
| 0.1663 | 0.2388 | 37 |
| 0.1658 | 0.2436 | 38 |
| 0.1654 | 0.2506 | 39 |
| 0.1639 | 0.2406 | 40 |
| 0.1635 | 0.2524 | 41 |
| 0.1619 | 0.2379 | 42 |
| 0.1609 | 0.2449 | 43 |
| 0.1602 | 0.2466 | 44 |
| 0.1602 | 0.2537 | 45 |
| 0.1586 | 0.2457 | 46 |
| 0.1576 | 0.2589 | 47 |
| 0.1573 | 0.2547 | 48 |
| 0.1566 | 0.2532 | 49 |
| 0.1546 | 0.2565 | 50 |
| 0.1540 | 0.2544 | 51 |
| 0.1545 | 0.2637 | 52 |
| 0.1515 | 0.2580 | 53 |
| 0.1520 | 0.2654 | 54 |
| 0.1524 | 0.2650 | 55 |
| 0.1513 | 0.2701 | 56 |
| 0.1500 | 0.2767 | 57 |
| 0.1492 | 0.2646 | 58 |
| 0.1483 | 0.2696 | 59 |
| 0.1480 | 0.2729 | 60 |
| 0.1475 | 0.2709 | 61 |
| 0.1458 | 0.2757 | 62 |
| 0.1460 | 0.2778 | 63 |
| 0.1446 | 0.2775 | 64 |
| 0.1440 | 0.2727 | 65 |
| 0.1438 | 0.2862 | 66 |
| 0.1444 | 0.2719 | 67 |
| 0.1423 | 0.2827 | 68 |
| 0.1418 | 0.2830 | 69 |
| 0.1402 | 0.2787 | 70 |
| 0.1404 | 0.2799 | 71 |
| 0.1388 | 0.2857 | 72 |
| 0.1392 | 0.2889 | 73 |
| 0.1398 | 0.2868 | 74 |
| 0.1389 | 0.2920 | 75 |
| 0.1359 | 0.3010 | 76 |
| 0.1369 | 0.2873 | 77 |
| 0.1366 | 0.2921 | 78 |
| 0.1358 | 0.2895 | 79 |
| 0.1343 | 0.3071 | 80 |
| 0.1344 | 0.2981 | 81 |
| 0.1341 | 0.3033 | 82 |
| 0.1328 | 0.3008 | 83 |
| 0.1332 | 0.2933 | 84 |
| 0.1317 | 0.3155 | 85 |
| 0.1310 | 0.3091 | 86 |
| 0.1307 | 0.3205 | 87 |
| 0.1295 | 0.3142 | 88 |
| 0.1295 | 0.3141 | 89 |
| 0.1299 | 0.3103 | 90 |
| 0.1282 | 0.3209 | 91 |
| 0.1284 | 0.3167 | 92 |
| 0.1272 | 0.3242 | 93 |
| 0.1270 | 0.3159 | 94 |
| 0.1245 | 0.3275 | 95 |
| 0.1244 | 0.3218 | 96 |
| 0.1248 | 0.3270 | 97 |
| 0.1241 | 0.3354 | 98 |
| 0.1231 | 0.3430 | 99 |
| 0.1233 | 0.3318 | 100 |
| 0.1222 | 0.3387 | 101 |
| 0.1225 | 0.3367 | 102 |
| 0.1221 | 0.3501 | 103 |
| 0.1214 | 0.3370 | 104 |
| 0.1207 | 0.3391 | 105 |
| 0.1197 | 0.3436 | 106 |
| 0.1193 | 0.3388 | 107 |
| 0.1208 | 0.3383 | 108 |
| 0.1186 | 0.3526 | 109 |
| 0.1177 | 0.3471 | 110 |
| 0.1179 | 0.3490 | 111 |
| 0.1179 | 0.3498 | 112 |
| 0.1177 | 0.3379 | 113 |
| 0.1169 | 0.3518 | 114 |
| 0.1165 | 0.3590 | 115 |
| 0.1161 | 0.3550 | 116 |
| 0.1159 | 0.3545 | 117 |
| 0.1150 | 0.3562 | 118 |
| 0.1123 | 0.3641 | 119 |
| 0.1137 | 0.3658 | 120 |
| 0.1153 | 0.3613 | 121 |
| 0.1130 | 0.3767 | 122 |
| 0.1129 | 0.3812 | 123 |
| 0.1127 | 0.3696 | 124 |
| 0.1118 | 0.3704 | 125 |
| 0.1116 | 0.3689 | 126 |
| 0.1107 | 0.3776 | 127 |
| 0.1103 | 0.3775 | 128 |
| 0.1108 | 0.3803 | 129 |
| 0.1097 | 0.3877 | 130 |
| 0.1093 | 0.3860 | 131 |
| 0.1080 | 0.3919 | 132 |
| 0.1082 | 0.3886 | 133 |
| 0.1091 | 0.3888 | 134 |
| 0.1071 | 0.3931 | 135 |
| 0.1072 | 0.3925 | 136 |
| 0.1069 | 0.3933 | 137 |
| 0.1065 | 0.3940 | 138 |
| 0.1072 | 0.3919 | 139 |
| 0.1059 | 0.3944 | 140 |
| 0.1049 | 0.4003 | 141 |
| 0.1045 | 0.4060 | 142 |
| 0.1040 | 0.4025 | 143 |
| 0.1055 | 0.3955 | 144 |
| 0.1033 | 0.4048 | 145 |
| 0.1033 | 0.4029 | 146 |
| 0.1019 | 0.4061 | 147 |
| 0.1030 | 0.4104 | 148 |
| 0.1015 | 0.4146 | 149 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tomaarsen/span-marker-xlm-roberta-base-multinerd
|
tomaarsen
| 2023-09-29T19:55:02Z | 20 | 35 |
span-marker
|
[
"span-marker",
"pytorch",
"tensorboard",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"multilingual",
"dataset:Babelscape/multinerd",
"license:cc-by-nc-sa-4.0",
"model-index",
"region:us"
] |
token-classification
| 2023-08-02T07:39:20Z |
---
license: cc-by-nc-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
pipeline_tag: token-classification
widget:
- text: "Amelia Earthart voló su Lockheed Vega 5B monomotor a través del Océano Atlántico hasta París ."
example_title: "Spanish"
- text: "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris ."
example_title: "English"
- text: "Amelia Earthart a fait voler son monomoteur Lockheed Vega 5B à travers l' ocean Atlantique jusqu'à Paris ."
example_title: "French"
- text: "Amelia Earthart flog mit ihrer einmotorigen Lockheed Vega 5B über den Atlantik nach Paris ."
example_title: "German"
- text: "Амелия Эртхарт перелетела на своем одномоторном самолете Lockheed Vega 5B через Атлантический океан в Париж ."
example_title: "Russian"
- text: "Amelia Earthart vloog met haar één-motorige Lockheed Vega 5B over de Atlantische Oceaan naar Parijs ."
example_title: "Dutch"
- text: "Amelia Earthart przeleciała swoim jednosilnikowym samolotem Lockheed Vega 5B przez Ocean Atlantycki do Paryża ."
example_title: "Polish"
- text: "Amelia Earthart flaug eins hreyfils Lockheed Vega 5B yfir Atlantshafið til Parísar ."
example_title: "Icelandic"
- text: "Η Amelia Earthart πέταξε το μονοκινητήριο Lockheed Vega 5B της πέρα από τον Ατλαντικό Ωκεανό στο Παρίσι ."
example_title: "Greek"
model-index:
- name: SpanMarker w. xlm-roberta-base on MultiNERD by Tom Aarsen
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
type: Babelscape/multinerd
name: MultiNERD
split: test
revision: 2814b78e7af4b5a1f1886fe7ad49632de4d9dd25
metrics:
- type: f1
value: 0.91314
name: F1
- type: precision
value: 0.91994
name: Precision
- type: recall
value: 0.90643
name: Recall
datasets:
- Babelscape/multinerd
language:
- multilingual
metrics:
- f1
- recall
- precision
---
# SpanMarker for Named Entity Recognition
**Note**: Due to major [tokenization limitations](#Limitations), this model is deprecated in favor of the much superior [tomaarsen/span-marker-mbert-base-multinerd](https://huggingface.co/tomaarsen/span-marker-mbert-base-multinerd) model.
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for multilingual Named Entity Recognition trained on the [MultiNERD](https://huggingface.co/datasets/Babelscape/multinerd) dataset. In particular, this SpanMarker model uses [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) as the underlying encoder. See [train.py](train.py) for the training script.
## Metrics
| **Language** | **Precision** | **Recall** | **F1** |
|--------------|---------------|------------|------------|
| **all** | 91.99 | 90.64 | **91.31** |
| **de** | 93.56 | 93.87 | **93.77** |
| **en** | 94.01 | 95.10 | **94.55** |
| **es** | 92.58 | 89.13 | **90.82** |
| **fr** | 93.23 | 88.68 | **90.90** |
| **it** | 90.23 | 92.60 | **93.40** |
| **nl** | 93.61 | 91.36 | **92.47** |
| **pl** | 92.51 | 90.81 | **91.66** |
| **pt** | 93.29 | 90.22 | **91.73** |
| **ru** | 92.37 | 92.91 | **92.64** |
| **zh** | 83.23 | 81.55 | **82.38** |
## Label set
| Class | Description | Examples |
|-------|-------------|----------|
PER (person) | People | Ray Charles, Jessica Alba, Leonardo DiCaprio, Roger Federer, Anna Massey. |
ORG (organization) | Associations, companies, agencies, institutions, nationalities and religious or political groups | University of Edinburgh, San Francisco Giants, Google, Democratic Party. |
LOC (location) | Physical locations (e.g. mountains, bodies of water), geopolitical entities (e.g. cities, states), and facilities (e.g. bridges, buildings, airports). | Rome, Lake Paiku, Chrysler Building, Mount Rushmore, Mississippi River. |
ANIM (animal) | Breeds of dogs, cats and other animals, including their scientific names. | Maine Coon, African Wild Dog, Great White Shark, New Zealand Bellbird. |
BIO (biological) | Genus of fungus, bacteria and protoctists, families of viruses, and other biological entities. | Herpes Simplex Virus, Escherichia Coli, Salmonella, Bacillus Anthracis. |
CEL (celestial) | Planets, stars, asteroids, comets, nebulae, galaxies and other astronomical objects. | Sun, Neptune, Asteroid 187 Lamberta, Proxima Centauri, V838 Monocerotis. |
DIS (disease) | Physical, mental, infectious, non-infectious, deficiency, inherited, degenerative, social and self-inflicted diseases. | Alzheimer’s Disease, Cystic Fibrosis, Dilated Cardiomyopathy, Arthritis. |
EVE (event) | Sport events, battles, wars and other events. | American Civil War, 2003 Wimbledon Championships, Cannes Film Festival. |
FOOD (food) | Foods and drinks. | Carbonara, Sangiovese, Cheddar Beer Fondue, Pizza Margherita. |
INST (instrument) | Technological instruments, mechanical instruments, musical instruments, and other tools. | Spitzer Space Telescope, Commodore 64, Skype, Apple Watch, Fender Stratocaster. |
MEDIA (media) | Titles of films, books, magazines, songs and albums, fictional characters and languages. | Forbes, American Psycho, Kiss Me Once, Twin Peaks, Disney Adventures. |
PLANT (plant) | Types of trees, flowers, and other plants, including their scientific names. | Salix, Quercus Petraea, Douglas Fir, Forsythia, Artemisia Maritima. |
MYTH (mythological) | Mythological and religious entities. | Apollo, Persephone, Aphrodite, Saint Peter, Pope Gregory I, Hercules. |
TIME (time) | Specific and well-defined time intervals, such as eras, historical periods, centuries, years and important days. No months and days of the week. | Renaissance, Middle Ages, Christmas, Great Depression, 17th Century, 2012. |
VEHI (vehicle) | Cars, motorcycles and other vehicles. | Ferrari Testarossa, Suzuki Jimny, Honda CR-X, Boeing 747, Fairey Fulmar.
## Usage
To use this model for inference, first install the `span_marker` library:
```bash
pip install span_marker
```
You can then run inference with this model like so:
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-xlm-roberta-base-multinerd")
# Run inference
entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
```
See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library.
## Contributions
Many thanks to [Simone Tedeschi](https://huggingface.co/sted97) from [Babelscape](https://babelscape.com) for his insight when training this model and his involvement in the creation of the training dataset.
|
LarryAIDraw/gertrude_mix2
|
LarryAIDraw
| 2023-09-29T19:43:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-29T19:41:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/26531/arknights-gertrude
|
LarryAIDraw/oriana_thomason_v1
|
LarryAIDraw
| 2023-09-29T19:40:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-29T19:33:28Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/153715/oriana-thomson-toaru-majutsu-no-index
|
ProtonH/q-FrozenLake-v1-4x4-noSlippery
|
ProtonH
| 2023-09-29T19:40:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-29T19:40:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Inlined `load_from_hub` helper from the course: download and unpickle the Q-table dictionary
model = pickle.load(open(hf_hub_download(repo_id="ProtonH/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"), "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/Serena-10
|
LarryAIDraw
| 2023-09-29T19:40:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-29T19:32:28Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/153938/serena-pokemon-lora
|
LarryAIDraw/ak_jackie-000005
|
LarryAIDraw
| 2023-09-29T19:39:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-29T19:31:02Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/153849/jackie-or-arknights
|
LarryAIDraw/misaki
|
LarryAIDraw
| 2023-09-29T19:39:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-29T19:30:41Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/153850/misaki-sakimiya-or-dead-mount-death-play
|
CyberHarem/hoto_kokoa_istheorderarabbit
|
CyberHarem
| 2023-09-29T19:37:38Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/hoto_kokoa_istheorderarabbit",
"license:mit",
"region:us"
] |
text-to-image
| 2023-09-28T03:39:29Z |
---
license: mit
datasets:
- CyberHarem/hoto_kokoa_istheorderarabbit
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hoto_kokoa_istheorderarabbit
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8680, you need to download `8680/hoto_kokoa_istheorderarabbit.pt` as the embedding and `8680/hoto_kokoa_istheorderarabbit.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8680**, with a score of 0.863. The trigger words are:
1. `hoto_kokoa_istheorderarabbit`
2. `orange_hair, blush, hair_ornament, smile, hairclip, purple_eyes, bangs, closed_mouth, indoors, short_hair, brown_hair`
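The card does not include code for this workflow. The sketch below shows one possible way to apply the step-8680 files with `diffusers`; it assumes the safetensors LoRA is compatible with `load_lora_weights` and that the `.pt` embedding loads via `load_textual_inversion` (HCP-Diffusion outputs may require conversion first), and the local paths are placeholders.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")
# Local paths to the step-8680 files downloaded from this repo (placeholders)
pipe.load_lora_weights(".", weight_name="8680/hoto_kokoa_istheorderarabbit.safetensors")
pipe.load_textual_inversion("8680/hoto_kokoa_istheorderarabbit.pt", token="hoto_kokoa_istheorderarabbit")
image = pipe("hoto_kokoa_istheorderarabbit, orange_hair, smile, hair_ornament, indoors").images[0]
image.save("hoto_kokoa.png")
```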
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9300 | 0.807 | [Download](9300/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9300/previews/nude.png) | [<NSFW, click to see>](9300/previews/nude2.png) |  |  |
| **8680** | **0.863** | [**Download**](8680/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8680/previews/nude.png) | [<NSFW, click to see>](8680/previews/nude2.png) |  |  |
| 8060 | 0.857 | [Download](8060/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8060/previews/nude.png) | [<NSFW, click to see>](8060/previews/nude2.png) |  |  |
| 7440 | 0.855 | [Download](7440/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7440/previews/nude.png) | [<NSFW, click to see>](7440/previews/nude2.png) |  |  |
| 6820 | 0.831 | [Download](6820/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6820/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6820/previews/nude.png) | [<NSFW, click to see>](6820/previews/nude2.png) |  |  |
| 6200 | 0.847 | [Download](6200/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6200/previews/nude.png) | [<NSFW, click to see>](6200/previews/nude2.png) |  |  |
| 5580 | 0.821 | [Download](5580/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5580/previews/nude.png) | [<NSFW, click to see>](5580/previews/nude2.png) |  |  |
| 4960 | 0.837 | [Download](4960/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4960/previews/nude.png) | [<NSFW, click to see>](4960/previews/nude2.png) |  |  |
| 4340 | 0.816 | [Download](4340/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4340/previews/nude.png) | [<NSFW, click to see>](4340/previews/nude2.png) |  |  |
| 3720 | 0.810 | [Download](3720/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3720/previews/nude.png) | [<NSFW, click to see>](3720/previews/nude2.png) |  |  |
| 3100 | 0.815 | [Download](3100/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3100/previews/nude.png) | [<NSFW, click to see>](3100/previews/nude2.png) |  |  |
| 2480 | 0.774 | [Download](2480/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2480/previews/nude.png) | [<NSFW, click to see>](2480/previews/nude2.png) |  |  |
| 1860 | 0.798 | [Download](1860/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1860/previews/nude.png) | [<NSFW, click to see>](1860/previews/nude2.png) |  |  |
| 1240 | 0.814 | [Download](1240/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1240/previews/nude.png) | [<NSFW, click to see>](1240/previews/nude2.png) |  |  |
| 620 | 0.722 | [Download](620/hoto_kokoa_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](620/previews/nude.png) | [<NSFW, click to see>](620/previews/nude2.png) |  |  |
|
abdelrahmanelo/Honadf
|
abdelrahmanelo
| 2023-09-29T19:19:15Z | 0 | 0 |
allennlp
|
[
"allennlp",
"art",
"text-classification",
"ar",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-29T19:16:01Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- ar
metrics:
- accuracy
library_name: allennlp
pipeline_tag: text-classification
tags:
- art
---
|
akashicmarga/Mistral-7B-Instruct-v0.1-q4f16_1-metal
|
akashicmarga
| 2023-09-29T19:17:13Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-09-29T18:37:49Z |
---
license: apache-2.0
---
The model in this repository uses Mistral-7B-Instruct-v0.1 (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), compiled with mlc-llm (https://llm.mlc.ai/docs/) for Metal with 4-bit quantization, plus an embedding layer for MLC embedding. You can run the model locally through the FastAPI server instead of the OpenAI API. For use with LangChain, refer to the sample_langchain.py file in the following GitHub link: https://github.com/mlc-ai/mlc-llm/blob/main/examples/rest/python/sample_langchain.py.
**Environment setup**
`conda create -n mlc-chat-venv -c mlc-ai -c conda-forge mlc-chat-cli-nightly`
`conda activate mlc-chat-venv`
**FastAPI server**
`python -m mlc_chat.rest --model Mistral-7B-Instruct-v0.1-q4f16_1/ --lib-path Mistral-7B-Instruct-v0.1-q4f16_1/Mistral-7B-Instruct-v0.1-q4f16_1-metal.so`
|
dyaminda/pneumonia-classification-02
|
dyaminda
| 2023-09-29T19:11:48Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"alexnet",
"image-classification",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2023-09-28T19:56:20Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pneumonia-classification-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pneumonia-classification-02
This model was fine-tuned from an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1321
- Accuracy: 0.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
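A minimal inference sketch (not part of the original card): the repo ships a custom AlexNet implementation, so `trust_remote_code=True` is assumed to be required, and the image path below is hypothetical.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="dyaminda/pneumonia-classification-02",
    trust_remote_code=True,  # the repo contains custom model code
)
print(classifier("chest_xray.png"))  # path to a chest X-ray image (hypothetical)
```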
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 50
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4043 | 0.99 | 52 | 0.3141 | 0.8747 |
| 0.2279 | 2.0 | 105 | 0.1656 | 0.9439 |
| 0.1707 | 2.99 | 157 | 0.1481 | 0.9332 |
| 0.1691 | 4.0 | 210 | 0.1305 | 0.9570 |
| 0.1337 | 4.95 | 260 | 0.1244 | 0.9475 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
pamelapaolacb/pruebaModeloTFM_DistilBert_in
|
pamelapaolacb
| 2023-09-29T18:54:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-cased-distilled-squad",
"base_model:finetune:distilbert/distilbert-base-cased-distilled-squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-29T14:55:16Z |
---
license: apache-2.0
base_model: distilbert-base-cased-distilled-squad
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: pruebaModeloTFM_DistilBert_in
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pruebaModeloTFM_DistilBert_in
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
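A minimal inference sketch (not part of the original card), using the standard `question-answering` pipeline with an illustrative example:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="pamelapaolacb/pruebaModeloTFM_DistilBert_in")
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
))
```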
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_3e-05_lr_8_e
|
roa7n
| 2023-09-29T18:16:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T18:16:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
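The card does not include a usage snippet. The sketch below shows one way to load the adapter with `peft`: the base model name is read from the adapter config, but using a causal-LM head is an assumption, since the task head is not documented here.
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "roa7n/gpt2-human_nontata_promoters-randomized_9_layers_3e-05_lr_8_e"
config = PeftConfig.from_pretrained(adapter_id)  # reads adapter_config.json from the Hub
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)  # head type is an assumption
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```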
|
ProtonH/ppo-Huggy
|
ProtonH
| 2023-09-29T18:16:04Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-29T18:15:53Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ProtonH/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
shahidul034/Medical_Llama_2
|
shahidul034
| 2023-09-29T18:14:05Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T17:57:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
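For reference, the same 8-bit settings can be expressed as a `transformers` `BitsAndBytesConfig` (an illustrative sketch, not taken from the original training script):
```python
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```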
### Framework versions
- PEFT 0.6.0.dev0
```
import torch
from peft import PeftModel
import transformers
import textwrap
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from transformers.generation.utils import GreedySearchDecoderOnlyOutput
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
DEVICE
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-hf",
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "my-llm", torch_dtype=torch.float16)
model.config.pad_token_id = tokenizer.pad_token_id = 0 # unk
model.config.bos_token_id = 1
model.config.eos_token_id = 2
model = model.eval()
model = torch.compile(model)
PROMPT_TEMPLATE = f"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
[INSTRUCTION]
### Response:
"""
def create_prompt(instruction: str) -> str:
return PROMPT_TEMPLATE.replace("[INSTRUCTION]", instruction)
print(create_prompt("What is (are) Glaucoma ?"))
def generate_response(prompt: str, model: PeftModel) -> GreedySearchDecoderOnlyOutput:
encoding = tokenizer(prompt, return_tensors="pt")
input_ids = encoding["input_ids"].to(DEVICE)
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.75,
repetition_penalty=1.1,
)
with torch.inference_mode():
return model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=256,
)
def format_response(response: GreedySearchDecoderOnlyOutput) -> str:
decoded_output = tokenizer.decode(response.sequences[0])
response = decoded_output.split("### Response:")[1].strip()
return "\n".join(textwrap.wrap(response))
def ask_alpaca(prompt: str, model: PeftModel = model) -> str:
prompt = create_prompt(prompt)
response = generate_response(prompt, model)
print(format_response(response))
ask_alpaca("What is (are) Glaucoma ?")
```
```
autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path "data" --train_split "train" --text_column "text" \
  --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 10 --num_train_epochs 3 --trainer sft --use_flash_attention_2
```
https://www.mlexpert.io/machine-learning/tutorials/alpaca-and-llama-inference
|
LemTenku/sister-Bee
|
LemTenku
| 2023-09-29T18:10:39Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.02707",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T17:30:06Z |
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: Base model was changed from LLaMA-2-7B to Mistral-7B-v0.1
All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
# Synthia-7B-v1.3
SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is released under Apache 2.0, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-7B-v1.3 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-7B-v1.3"
output_file_path = "./Synthia-7B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-7B-v1.3,
author = {Migel Tissera},
title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
osiria/distiluse-base-italian
|
osiria
| 2023-09-29T18:07:35Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"feature-extraction",
"it",
"arxiv:1907.04307",
"arxiv:2010.05609",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-11T21:23:41Z |
---
license: apache-2.0
language:
- it
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: DistilUSE</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>Universal Sentence Encoder</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mDistilUSE</b> ([distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1)) as a starting point and focusing it on the Italian language by modifying the embedding layer
(as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset).
The resulting model has 67M parameters, a vocabulary of 30,785 tokens, and a size of ~270 MB.
It can be used to encode Italian texts and compute similarities between them.
<h3>Quick usage</h3>
```python
from transformers import AutoTokenizer, AutoModel
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("osiria/distiluse-base-italian")
model = AutoModel.from_pretrained("osiria/distiluse-base-italian")
text1 = "Alessandro Manzoni è stato uno scrittore italiano"
text2 = "Giacomo Leopardi è stato un poeta italiano"
vec1 = model(tokenizer.encode(text1, return_tensors = "pt")).last_hidden_state[0,0,:].cpu().detach().numpy()
vec2 = model(tokenizer.encode(text2, return_tensors = "pt")).last_hidden_state[0,0,:].cpu().detach().numpy()
cosine_similarity = np.dot(vec1, vec2)/(np.linalg.norm(vec1)*np.linalg.norm(vec2))
print("COSINE SIMILARITY:", cosine_similarity)
# COSINE SIMILARITY: 0.734292
```
<h3>References</h3>
[1] https://arxiv.org/abs/1907.04307
[2] https://arxiv.org/abs/2010.05609
<h3>License</h3>
The model is released under <b>Apache-2.0</b> license
|
osiria/diablo-italian-base-1.3b
|
osiria
| 2023-09-29T18:07:22Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xglm",
"text-generation",
"it",
"arxiv:2005.14165",
"arxiv:2112.10668",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-29T20:32:54Z |
---
license: mit
language:
- it
pipeline_tag: text-generation
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: DIABLO 1.3B 🔥</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This model is a <b>causal</b> language model for the <b>Italian</b> language, based on a GPT-like <b>[1]</b> architecture (more specifically, the model has been obtained by modifying Meta's XGLM architecture <b>[2]</b> and exploiting its 1.7B checkpoint).
The model has ~1.3B parameters and a vocabulary of 50,335 tokens. It is a foundation model, pre-trained for causal language modeling, so it is mainly suitable for basic natural language generation, and you will have to fine-tune it in order to use it on more specific downstream tasks.
<h3>Quick usage</h3>
In order to use the model for inference on GPU, the following pipeline is needed:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("osiria/diablo-italian-base-1.3b")
model = AutoModelForCausalLM.from_pretrained("osiria/diablo-italian-base-1.3b", torch_dtype=torch.float16)
device = torch.device("cuda")
model = model.to(device)
pipeline_nlg = pipeline("text-generation", model = model, tokenizer = tokenizer, device = 0)
pipeline_nlg("Ciao, mi chiamo Marco Rossi e")
# [{'generated_text': 'Ciao, mi chiamo Marco Rossi e sono un blogger italiano.'}]
```
<h3>Limitations</h3>
The model might behave erratically when presented with prompts which are too far away from its pre-training and, because of the probabilistic nature of its generation, it might occasionally produce biased or offensive content with respect to gender, race, ideologies, and political or religious beliefs.
These limitations imply that the model and its outputs should be used with caution, and should not be involved in situations that require the generated text to be fair or true.
<h3>References</h3>
[1] https://arxiv.org/abs/2005.14165
[2] https://arxiv.org/abs/2112.10668
<h3>License</h3>
The model is released under <b>MIT</b> license
|
osiria/bert-base-italian-cased
|
osiria
| 2023-09-29T18:07:17Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"it",
"arxiv:1810.04805",
"arxiv:2010.05609",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-29T17:52:45Z |
---
license: apache-2.0
language:
- it
widget:
- text: "Milano è una [MASK] dell'Italia"
example_title: "Example 1"
- text: "Giacomo Leopardi è stato uno dei più grandi [MASK] del classicismo italiano"
example_title: "Example 2"
- text: "La pizza è un piatto tipico della [MASK] gastronomica italiana"
example_title: "Example 3"
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: BERT</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>BERT</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mBERT</b> ([bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)) as a starting point and focusing it on the Italian language by modifying the embedding layer
(as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset).
The resulting model has 110M parameters, a vocabulary of 30,785 tokens, and a size of ~430 MB.
<h3>Quick usage</h3>
```python
from transformers import BertTokenizerFast, BertModel
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-base-italian-cased")
model = BertModel.from_pretrained("osiria/bert-base-italian-cased")
```
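Since the checkpoint is pre-trained for masked language modeling, it can also be queried through the `fill-mask` pipeline. A minimal sketch using one of the widget examples above (scores depend on the checkpoint):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="osiria/bert-base-italian-cased")
for prediction in fill_mask("Milano è una [MASK] dell'Italia"):
    print(prediction["token_str"], round(prediction["score"], 3))
```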
<h3>References</h3>
[1] https://arxiv.org/abs/1810.04805
[2] https://arxiv.org/abs/2010.05609
<h3>License</h3>
The model is released under <b>Apache-2.0</b> license
|
actionpace/13B-Thorns-l2
|
actionpace
| 2023-09-29T17:49:47Z | 1 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-07T18:38:21Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* 13B-Thorns-l2_Q4_K_M.gguf
* 13B-Thorns-l2_Q5_K_M.gguf
**Source:** [CalderaAI](https://huggingface.co/CalderaAI)
**Source Model:** [13B-Thorns-l2](https://huggingface.co/CalderaAI/13B-Thorns-l2)
**Source models for CalderaAI/13B-Thorns-l2 (Merge)**
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) ([Ref](https://huggingface.co/actionpace/chronos-13b-v2))
- [garage-bAInd/Platypus2-13B](https://huggingface.co/garage-bAInd/Platypus2-13B) ([Ref](https://huggingface.co/actionpace/Platypus2-13B))
- [jondurbin/airoboros-l2-13b-gpt4-1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1)
- [KoboldAI/LLAMA2-13B-Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1) ([Ref](https://huggingface.co/actionpace/LLAMA2-13B-Holodeck-1))
- [nRuaif/Kimiko-v2-13B](https://huggingface.co/nRuaif/Kimiko-v2-13B) (Lora)
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
|
asmaa1/videomae-base-groub19-20-finetuned-SLT-subset
|
asmaa1
| 2023-09-29T17:44:00Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-09-29T06:19:30Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-groub19-20-finetuned-SLT-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-groub19-20-finetuned-SLT-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1970
- Accuracy: 0.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.853 | 0.14 | 11 | 3.6435 | 0.0732 |
| 3.7412 | 1.14 | 22 | 3.5800 | 0.0732 |
| 3.7045 | 2.14 | 33 | 3.4833 | 0.1220 |
| 3.487 | 3.14 | 44 | 3.3655 | 0.1220 |
| 3.4174 | 4.14 | 55 | 3.2769 | 0.1220 |
| 3.3735 | 5.14 | 66 | 3.2278 | 0.1220 |
| 3.3319 | 6.14 | 77 | 3.1988 | 0.1220 |
| 3.1906 | 7.04 | 80 | 3.1970 | 0.1220 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
AparnaMahajan/Llama2
|
AparnaMahajan
| 2023-09-29T17:42:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-27T04:04:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Gayathri142214002/t5_Question_Generation_3
|
Gayathri142214002
| 2023-09-29T17:23:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-27T06:21:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_Question_Generation_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_Question_Generation_3
This model is a fine-tuned version of [Gayathri142214002/t5_Question_Generation_2](https://huggingface.co/Gayathri142214002/t5_Question_Generation_2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5741 | 0.39 | 500 | 0.4854 |
| 0.5319 | 0.78 | 1000 | 0.4408 |
| 0.4804 | 1.17 | 1500 | 0.4402 |
| 0.4163 | 1.57 | 2000 | 0.4260 |
| 0.4199 | 1.96 | 2500 | 0.4183 |
| 0.355 | 2.35 | 3000 | 0.4318 |
| 0.3643 | 2.74 | 3500 | 0.4217 |
| 0.3437 | 3.13 | 4000 | 0.4291 |
| 0.3187 | 3.52 | 4500 | 0.4280 |
| 0.3294 | 3.91 | 5000 | 0.4160 |
| 0.2915 | 4.31 | 5500 | 0.4248 |
| 0.2949 | 4.7 | 6000 | 0.4236 |
| 0.2902 | 5.09 | 6500 | 0.4176 |
| 0.267 | 5.48 | 7000 | 0.4244 |
| 0.2722 | 5.87 | 7500 | 0.4216 |
| 0.2537 | 6.26 | 8000 | 0.4269 |
| 0.2532 | 6.65 | 8500 | 0.4250 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Implementacion/distilbert-base-uncased-finetuned-squad
|
Implementacion
| 2023-09-29T17:12:07Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-29T17:11:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
adutchscotsman/ppo-Huggy
|
adutchscotsman
| 2023-09-29T17:11:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-29T17:10:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: adutchscotsman/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
etonkou/swahili
|
etonkou
| 2023-09-29T17:00:37Z | 272 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-29T16:54:28Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: swahili
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swahili
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0009
- Wer: 0.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.723 | 1.5 | 1000 | 1.1521 | 0.7429 |
| 0.8457 | 3.0 | 2000 | 1.2019 | 0.7280 |
| 0.6465 | 4.5 | 3000 | 1.0385 | 0.6602 |
| 0.5081 | 6.0 | 4000 | 0.9303 | 0.6310 |
| 0.3864 | 7.5 | 5000 | 1.0838 | 0.6240 |
| 0.3109 | 9.0 | 6000 | 1.0009 | 0.6055 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
dracero/a2c-PandaReachDense-v3
|
dracero
| 2023-09-29T16:51:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-29T16:45:58Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub naming convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub("dracero/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
kbooth-insight/booth-test
|
kbooth-insight
| 2023-09-29T16:51:26Z | 29 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-29T16:46:18Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### booth-test Dreambooth model trained by kbooth-insight with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
eugene6/poca-SoccerTwos
|
eugene6
| 2023-09-29T16:51:12Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-09-29T16:42:40Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eugene6/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
usvsnsp/pythia-2.8b-ppo
|
usvsnsp
| 2023-09-29T16:50:46Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-27T15:42:56Z |
Wandb run: https://wandb.ai/eleutherai/pythia-rlhf/runs/rh4mnzmr
Eval Results:
| Tasks |Version|Filter| Metric |Value | |Stderr|
|--------------|-------|------|----------|-----:|---|-----:|
|arc_challenge |Yaml |none |acc |0.2884|± |0.0132|
| | |none |acc_norm |0.3183|± |0.0136|
|arc_easy |Yaml |none |acc |0.6124|± |0.0100|
| | |none |acc_norm |0.5328|± |0.0102|
|lambada_openai|Yaml |none |perplexity|8.7783|± |0.2341|
| | |none |acc |0.5783|± |0.0069|
|logiqa |Yaml |none |acc |0.2151|± |0.0161|
| | |none |acc_norm |0.2826|± |0.0177|
|piqa |Yaml |none |acc |0.7176|± |0.0105|
| | |none |acc_norm |0.7176|± |0.0105|
|sciq |Yaml |none |acc |0.8590|± |0.0110|
| | |none |acc_norm |0.7790|± |0.0131|
|winogrande |Yaml |none |acc |0.5959|± |0.0138|
|wsc |Yaml |none |acc |0.3654|± |0.0474|
|
fallen01/myproject
|
fallen01
| 2023-09-29T16:47:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-29T16:41:57Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### myproject Dreambooth model trained by fallen01 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SSET-126
Sample pictures of this concept:

|
language-ml-lab/postagger-azb
|
language-ml-lab
| 2023-09-29T16:41:02Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"az",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-26T16:29:59Z |
---
pipeline_tag: token-classification
widget:
- text: سن نجورسن؟
example_title: Example 1
- text: من سنی سویرم.
example_title: Example 2
- text: سن شاهین قیزین چوخ سئویرسن.
example_title: Example 3
- text: آلما آلیب گلرم، سن هئچ بیر شی آلما.
example_title: Example 4
language:
- az
metrics:
- accuracy
- f1
---
# POS Tagger
- Type: Fine-tuned BERT-based Part-of-Speech (POS) tagging model
- Description: This model has been fine-tuned using [AzerBERT](https://huggingface.co/language-ml-lab/AzerBert) for part-of-speech tagging tasks in Iranian Azerbaijani text. It can be used to annotate text with 11 POS tags, which is essential for various downstream NLP applications.
## How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("token-classification", model="language-ml-lab/postagger-azb")
```
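The pipeline can then be applied directly to Iranian Azerbaijani text, for example one of the widget sentences above (a minimal sketch; the predicted labels come from the model's 11-tag set):
```python
# Tag each token of an example sentence
for token in pipe("سن نجورسن؟"):
    print(token["word"], token["entity"], round(token["score"], 3))
```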
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("language-ml-lab/postagger-azb")
model = AutoModelForTokenClassification.from_pretrained("language-ml-lab/postagger-azb")
```
|
RsGoksel/Breast-Tumor-Mass-Detection
|
RsGoksel
| 2023-09-29T16:38:12Z | 0 | 0 | null |
[
"Cancer",
"Tumour",
"Breast",
"Mammography",
"Mass",
"object-detection",
"license:apache-2.0",
"region:us"
] |
object-detection
| 2023-09-29T16:14:39Z |
---
license: apache-2.0
pipeline_tag: object-detection
tags:
- Cancer
- Tumour
- Breast
- Mammography
- Mass
---
## Introduction
The Breast Mass Object Detection Model is designed to detect breast masses in mammography.
- **Developed by:** https://github.com/RsGoksel
### More Tools
- **Repository:** https://github.com/RsGoksel/Breast-Tissue-Cropper-Tools
|
RogerB/afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned
|
RogerB
| 2023-09-29T16:35:30Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:RogerB/afro-xlmr-large-kinyarwanda-finetuned",
"base_model:finetune:RogerB/afro-xlmr-large-kinyarwanda-finetuned",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T16:21:32Z |
---
license: mit
base_model: RogerB/afro-xlmr-large-kinyarwanda-finetuned
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinyarwanda-finetuned-kinyarwanda-tweets-finetuned
This model is a fine-tuned version of [RogerB/afro-xlmr-large-kinyarwanda-finetuned](https://huggingface.co/RogerB/afro-xlmr-large-kinyarwanda-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7567
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0292 | 1.0 | 500 | 1.9115 |
| 1.9227 | 2.0 | 1000 | 1.8062 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
flyingfishinwater/chinese-baby-llama2
|
flyingfishinwater
| 2023-09-29T16:33:10Z | 102 | 14 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-01T16:43:49Z |
---
license: apache-2.0
language:
- zh
pipeline_tag: text2text-generation
---
# Chinese Baby Llama2 Base Model
[English](./readme_en.md) [Simplified Chinese](./readme.md)
This is an ultra-small model with roughly 115M parameters, built on the Llama2 architecture. The version uploaded here is the pre-trained version and has not yet undergone SFT; a chat version with SFT will be released soon.
The goals of developing this ultra-small model are:
1. Rehearse the full process of pre-training a base large language model from scratch
2. Provide a quickly deployable environment for developing larger models, since loading large models is very time-consuming and hinders fast iterative development and debugging
3. Allow fast parameter tuning on consumer-grade GPUs and reproduction of the optimization algorithms from various papers
## Training Data
429 Chinese web fantasy novels were collected and cleaned into plain txt; lines with fewer than 10 characters or more than 4096 characters were removed, and the result serves as the base pre-training data.
The cleaned txt files total 3.3 GB, containing 868M Chinese characters across 18M lines.
## Chinese Tokenizer
The model's tokenizer was also trained from scratch instead of reusing an existing one.
Training parameters:
1. Max Sentence Length: 2657
2. Vocab Size: 32000
3. Normalization Rule: identity
4. Character coverage: 0.9995
Comparison with the standard Llama2 tokenizer:
| | Llama2 | Baby Llama2 |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| tokens | 32000 | 65534 |
| model_max_length | 4096 | 4096 |
| 白日依山尽,黄河入海流。欲穷千里目,更上一层楼。 | :['▁', '白', '日', '<0xE4>', '<0xBE>', '<0x9D>', '山', '<0xE5>', '<0xB0>', '<0xBD>', ',', '黄', '河', '入', '海', '流', '。', '<0xE6>', '<0xAC>', '<0xB2>', '<0xE7>', '<0xA9>', '<0xB7>', '千', '里', '目', ',', '更', '上', '一', '<0xE5>', '<0xB1>', '<0x82>', '<0xE6>', '<0xA5>', '<0xBC>', '。'] | ['▁白', '日', '依山', '尽', ',', '黄河', '入海', '流', '。', '欲', '穷', '千里', '目', ',', '更', '上一层', '楼', '。'] |
| | [1, 29871, 30868, 30325, 231, 193, 160, 30329, 232, 179, 192, 30214, 31491, 30828, 30752, 30581, 31151, 30267, 233, 175, 181, 234, 172, 186, 31159, 30755, 30895, 30214, 31100, 30429, 30287, 232, 180, 133, 233, 168, 191, 30267] | [65534, 1764, 63106, 62484, 63203, 62793, 14729, 29082, 63130, 62795, 63920, 64266, 3271, 63038, 62793, 63007, 17116, 63636, 62795] |
| The primary use of LLaMA is research on large language models, including BERT, XLNet, and RoBERTa. | :['▁The', '▁primary', '▁use', '▁of', '▁L', 'La', 'MA', '▁is', '▁research', '▁on', '▁large', '▁language', '▁models', ',', '▁including', '▁B', 'ERT', ',', '▁X', 'L', 'Net', ',', '▁and', '▁Ro', 'BER', 'T', 'a', '.'] | :['▁T', 'h', 'e', '▁p', 'ri', 'm', 'ar', 'y', '▁', 'u', 'se', '▁o', 'f', '▁', '<0x4C>', '<0x4C>', 'a', 'M', 'A', '▁i', 's', '▁', 're', 'se', 'ar', 'ch', '▁o', 'n', '▁', 'l', 'ar', 'g', 'e', '▁', 'l', 'ang', 'ua', 'g', 'e', '▁m', 'od', 'e', 'ls', ',', '▁', 'in', 'c', 'lu', 'd', 'i', 'ng', '▁', '<0x42>', '<0x45>', '<0x52>', 'T', ',', '▁', 'X', '<0x4C>', '<0x4E>', 'e', 't', ',', '▁', 'an', 'd', '▁', '<0x52>', 'o', '<0x42>', '<0x45>', '<0x52>', 'T', 'a', '.'] |
| | [1, 450, 7601, 671, 310, 365, 5661, 1529, 338, 5925, 373, 2919, 4086, 4733, 29892, 3704, 350, 20161, 29892, 1060, 29931, 6779, 29892, 322, 1528, 13635, 29911, 29874, 29889] | [65534, 14962, 63590, 64211, 27052, 16426, 63475, 13594, 64158, 62797, 63569, 11279, 13719, 65368, 62797, 81, 81, 63518, 64918, 64752, 24145, 63338, 62797, 44186, 11279, 13594, 9251, 13719, 63541, 62797, 64399, 13594, 64101, 64211, 62797, 64399, 37035, 36500, 64101, 64211, 2939, 11320, 64211, 53670, 62793, 62797, 18944, 63603, 14575, 64096, 63484, 1171, 62797, 71, 74, 87, 64760, 62793, 62797, 65257, 81, 83, 64211, 63073, 62793, 62797, 6604, 64096, 62797, 87, 63143, 71, 74, 87, 64760, 63518, 62801] |
The Llama2 tokenizer has 32000 tokens and is optimized for English text, while Baby Llama2 has 65534 tokens and covers Chinese only.
As the comparison shows, Baby Llama2 tokenizes Chinese text more compactly than standard Llama2, while its English tokenization is weaker.
## Full Training Corpus Processing
Before full training, the corpus is tokenized. The freshly trained tokenizer reads the novels' txt files line by line, tokenizes each line, and appends eos_token_id at the end of each line as a separator. All processed binary data is then stored on disk as a 2-D np.uint16 array with shape [-1: max_sentence_length].
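A minimal sketch of this preprocessing step (the corpus file name is hypothetical and loading the tokenizer from this repo with `AutoTokenizer` is an assumption; the original preprocessing script is not published here):
```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flyingfishinwater/chinese-baby-llama2")

max_sentence_length = 1024  # should match the max_seq_len used for pre-training
ids = []
with open("novels.txt", encoding="utf-8") as f:  # hypothetical cleaned corpus file
    for line in f:
        line = line.strip()
        if not line:
            continue
        # Tokenize the line and append eos_token_id as a separator
        ids.extend(tokenizer.encode(line) + [tokenizer.eos_token_id])

# Pack into a 2-D np.uint16 array of shape [-1, max_sentence_length] and save to disk
usable = len(ids) // max_sentence_length * max_sentence_length
arr = np.array(ids[:usable], dtype=np.uint16).reshape(-1, max_sentence_length)
arr.tofile("pretrain_data.bin")
```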
## Pre-training
Pre-training was run on a single RTX 3090. The model uses the Llama2 architecture with the following training parameters:
1. max_seq_len = 1024
2. dim = 768
3. n_headers = 12
4. n_layers = 12
5. n_kv_headers = 12
## Demo
[Huggingface Space For Baby Llama2](https://huggingface.co/spaces/wangqi777/wangqi777-chinese-baby-llama2)
## [TODO]
1. The model source code will be released on GitHub after cleanup
2. Add SFT fine-tuning so that the model can hold conversations
## Acknowledgements
[llama2.c](https://github.com/karpathy/llama2.c)
[baby-llama2-chinese](https://github.com/DLLXW/baby-llama2-chinese)
|
twm213/food_classifier
|
twm213
| 2023-09-29T16:32:47Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-29T16:16:06Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: twm213/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# twm213/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3748
- Validation Loss: 0.3432
- Train Accuracy: 0.914
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7859 | 1.6483 | 0.799 | 0 |
| 1.2220 | 0.9133 | 0.842 | 1 |
| 0.7054 | 0.5449 | 0.898 | 2 |
| 0.4945 | 0.4446 | 0.892 | 3 |
| 0.3748 | 0.3432 | 0.914 | 4 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.9.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_0.0003_lr_8_e
|
roa7n
| 2023-09-29T16:27:12Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T16:27:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
RsGoksel/Breast-Mammography-Detection
|
RsGoksel
| 2023-09-29T16:26:12Z | 0 | 0 | null |
[
"Breast",
"Mammography",
"ROI",
"Medical",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-09-28T09:45:16Z |
---
license: apache-2.0
pipeline_tag: image-classification
tags:
- Breast
- Mammography
- ROI
- Medical
---
# Breast Tissue ROI Object Detection Model
## Introduction:
The Breast Tissue ROI Object Detection Model is designed to locate regions of interest (ROIs) within mammographic images.
### 1. Purpose
The primary purpose of the Breast Tissue ROI Object Detection Model is to accurately and efficiently identify regions of interest in mammographic images. These regions typically contain suspicious lesions, calcifications, or abnormalities that require further examination to determine the presence of breast cancer.
## 2. Deep Learning Architecture:
This model is built on a state-of-the-art deep learning architecture, leveraging Convolutional Neural Networks (CNNs) for feature extraction. It utilizes a combination of convolutional layers, pooling layers, and fully connected layers to process mammographic images effectively.
- **Developed by:** https://github.com/RsGoksel
- **Model type:** Pytorch (.pt)
### More Tools
- **Repository:** https://github.com/RsGoksel/Breast-Tissue-Cropper-Tools

|
Tensoic/Llama-2-7B-alpaca-2k-test
|
Tensoic
| 2023-09-29T16:21:00Z | 4 | 0 |
peft
|
[
"peft",
"llama",
"dataset:mhenrichsen/alpaca_2k_test",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2023-09-07T16:13:54Z |
---
library_name: peft
datasets:
- mhenrichsen/alpaca_2k_test
---
Greetings traveler! We trained this LoRA adapter for the base `Llama-2-7b-hf` model on the `mhenrichsen/alpaca_2k_test` dataset.
Full merged weights are available at: https://huggingface.co/Tensoic/Llama-2-7B-alpaca-2k-test-merged
Visit us at: https://tensoic.com

## Training Setup:
```
Number of GPUs: 8x NVIDIA V100 GPUs
GPU Memory: 32GB each (SXM2 form factor)
```
## Training Configuration:
```yaml
base_model: meta-llama/Llama-2-7b-hf
base_model_config: meta-llama/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: mhenrichsen/alpaca_2k_test
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention: false
warmup_steps: 10
eval_steps: 20
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
```
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
```
### Framework versions
- PEFT 0.6.0.dev0
|
language-ml-lab/AzerBert
|
language-ml-lab
| 2023-09-29T16:20:11Z | 135 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"az",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-21T09:35:45Z |
---
pipeline_tag: fill-mask
widget:
- text: سن نجورسن [MASK]
example_title: Example 1
- text: بو [MASK] کتابی ده.
example_title: Example 2
- text: دیل [MASK] اؤنملی دیر.
example_title: Example 3
language:
- az
metrics:
- perplexity
---
# AzerBERT
- Type: BERT-based language model transformer
- Description: AzerBERT is a pre-trained language model specifically tailored for the Iranian Azerbaijani language. It can be used for various NLP tasks, including text classification, named entity recognition, and more.
## How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("fill-mask", model="language-ml-lab/AzerBert")
```
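The pipeline can then be used to fill masked tokens, e.g. with one of the widget examples above (a minimal sketch; scores depend on the checkpoint):
```python
# Top predictions for the masked token
for prediction in pipe("سن نجورسن [MASK]"):
    print(prediction["token_str"], round(prediction["score"], 3))
```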
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("language-ml-lab/AzerBert")
model = AutoModelForMaskedLM.from_pretrained("language-ml-lab/AzerBert")
```
|
flytech/Ruckus-13b-30
|
flytech
| 2023-09-29T15:52:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:finetune:meta-llama/Llama-2-13b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T15:32:16Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
model-index:
- name: Ruckus-13b-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ruckus-13b-30
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
cloudwalkerw/wavlm-base_4
|
cloudwalkerw
| 2023-09-29T15:43:07Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"audio-classification",
"generated_from_trainer",
"base_model:microsoft/wavlm-base",
"base_model:finetune:microsoft/wavlm-base",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-28T17:04:51Z |
---
base_model: microsoft/wavlm-base
tags:
- audio-classification
- generated_from_trainer
metrics:
- f1
model-index:
- name: wavlm-base_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base_4
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3325
- F1: 0.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3784 | 0.25 | 100 | 0.0784 | 0.9906 |
| 0.1125 | 0.5 | 200 | 0.0638 | 0.9925 |
| 0.1158 | 0.76 | 300 | 0.1716 | 0.9773 |
| 0.327 | 1.01 | 400 | 0.3308 | 0.9459 |
| 0.3346 | 1.26 | 500 | 0.3449 | 0.9459 |
| 0.3345 | 1.51 | 600 | 0.3316 | 0.9459 |
| 0.3313 | 1.76 | 700 | 0.3320 | 0.9459 |
| 0.3249 | 2.02 | 800 | 0.3327 | 0.9459 |
| 0.3403 | 2.27 | 900 | 0.3315 | 0.9459 |
| 0.3345 | 2.52 | 1000 | 0.3382 | 0.9459 |
| 0.3174 | 2.77 | 1100 | 0.3376 | 0.9459 |
| 0.3274 | 3.02 | 1200 | 0.3354 | 0.9459 |
| 0.3296 | 3.28 | 1300 | 0.3307 | 0.9459 |
| 0.3175 | 3.53 | 1400 | 0.3341 | 0.9459 |
| 0.3416 | 3.78 | 1500 | 0.3344 | 0.9459 |
| 0.3412 | 4.03 | 1600 | 0.3308 | 0.9459 |
| 0.3293 | 4.28 | 1700 | 0.3314 | 0.9459 |
| 0.3346 | 4.54 | 1800 | 0.3308 | 0.9459 |
| 0.3279 | 4.79 | 1900 | 0.3317 | 0.9459 |
| 0.3246 | 5.04 | 2000 | 0.3318 | 0.9459 |
| 0.3373 | 5.29 | 2100 | 0.3311 | 0.9459 |
| 0.3262 | 5.55 | 2200 | 0.3335 | 0.9459 |
| 0.3279 | 5.8 | 2300 | 0.3326 | 0.9459 |
| 0.3298 | 6.05 | 2400 | 0.3323 | 0.9459 |
| 0.3397 | 6.3 | 2500 | 0.3311 | 0.9459 |
| 0.3312 | 6.55 | 2600 | 0.3386 | 0.9459 |
| 0.3291 | 6.81 | 2700 | 0.3317 | 0.9459 |
| 0.3146 | 7.06 | 2800 | 0.3323 | 0.9459 |
| 0.3296 | 7.31 | 2900 | 0.3313 | 0.9459 |
| 0.3367 | 7.56 | 3000 | 0.3317 | 0.9459 |
| 0.3232 | 7.81 | 3100 | 0.3318 | 0.9459 |
| 0.3314 | 8.07 | 3200 | 0.3325 | 0.9459 |
| 0.3201 | 8.32 | 3300 | 0.3323 | 0.9459 |
| 0.3301 | 8.57 | 3400 | 0.3347 | 0.9459 |
| 0.3268 | 8.82 | 3500 | 0.3325 | 0.9459 |
| 0.3361 | 9.07 | 3600 | 0.3321 | 0.9459 |
| 0.3395 | 9.33 | 3700 | 0.3313 | 0.9459 |
| 0.3231 | 9.58 | 3800 | 0.3319 | 0.9459 |
| 0.3197 | 9.83 | 3900 | 0.3326 | 0.9459 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.0.post302
- Datasets 2.14.5
- Tokenizers 0.13.3
|
reginaboateng/finnal_compacter_Bioasq_adapter
|
reginaboateng
| 2023-09-29T15:33:32Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:biaoasq",
"dataset:bioasq7b",
"region:us"
] | null | 2023-09-29T15:33:30Z |
---
tags:
- bert
- adapterhub:biaoasq
- adapter-transformers
datasets:
- bioasq7b
---
# Adapter `reginaboateng/finnal_compacter_Bioasq_adapter` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [biaoasq](https://adapterhub.ml/explore/biaoasq/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/finnal_compacter_Bioasq_adapter", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Takagi-san/SaProt_650M_PDB
|
Takagi-san
| 2023-09-29T15:20:39Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T11:30:57Z |
---
license: mit
---
We provide both a Hugging Face version and an
[esm version](https://github.com/facebookresearch/esm) of
SaProt (see our GitHub <https://github.com/SaProt/SaProt>). Users can choose either one to use.
### Huggingface model
The following code shows how to load the model.
```
from transformers import EsmTokenizer, EsmForMaskedLM
model_path = "/your/path/to/SaProt_650M_PDB"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)
#################### Example ####################
device = "cuda"
model.to(device)
seq = "MdEvVpQpLrVyQdYaKv"
tokens = tokenizer.tokenize(seq)
print(tokens)
inputs = tokenizer(seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model(**inputs)
print(outputs.logits.shape)
"""
['Md', 'Ev', 'Vp', 'Qp', 'Lr', 'Vy', 'Qd', 'Ya', 'Kv']
torch.Size([1, 11, 446])
"""
```
### esm model
The esm version is also stored in the same folder, named `SaProt_650M_AF2.pt`. We provide a function to load the model.
```
from utils.esm_loader import load_esm_saprot
model_path = "/your/path/to/SaProt_650M_PDB.pt"
model, alphabet = load_esm_saprot(model_path)
```
|
chats-bug/llama-2-13b-email-subject-finetuned
|
chats-bug
| 2023-09-29T15:13:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-28T10:17:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
rasta/distilbert-base-uncased-finetuned-fashion
|
rasta
| 2023-09-29T15:03:55Z | 112 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-09T07:49:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-fashion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-fashion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a manually created dataset in order to distinguish fashion (label_0) from non-fashion (label_1) items.
It achieves the following results on the evaluation set:
- Loss: 0.0809
- Accuracy: 0.98
- F1: 0.9801
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4017 | 1.0 | 47 | 0.1220 | 0.966 | 0.9662 |
| 0.115 | 2.0 | 94 | 0.0809 | 0.98 | 0.9801 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
RogerB/afro-xlmr-large-kinyarwanda-finetuned
|
RogerB
| 2023-09-29T14:57:02Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:Davlan/afro-xlmr-large",
"base_model:finetune:Davlan/afro-xlmr-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-28T09:56:43Z |
---
license: mit
base_model: Davlan/afro-xlmr-large
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-kinyarwanda-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-kinyarwanda-finetuned
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3557 | 1.0 | 1250 | 1.2004 |
| 1.2352 | 2.0 | 2500 | 1.1377 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
openaccess-ai-collective/tiny-mistral
|
openaccess-ai-collective
| 2023-09-29T14:50:37Z | 17,213 | 12 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-28T15:10:32Z |
Mistral-architecture model, randomly initialized. Useful for end-to-end (e2e) testing.
|
Vijish/alphamask
|
Vijish
| 2023-09-29T14:45:05Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-29T14:00:14Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Vijish/alphamask
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
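A loading sketch with `diffusers` (the type of conditioning image is not documented here, so only loading is shown; loading the ControlNet weights from the repo root is an assumption):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("Vijish/alphamask", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, torch_dtype=torch.float16
)
```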
|
gokuls/HBERTv1_emb_compress_48_L10_H768_A12
|
gokuls
| 2023-09-29T14:39:29Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-27T06:39:55Z |
---
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- accuracy
model-index:
- name: HBERTv1_emb_compress_48_L10_H768_A12
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: gokuls/wiki_book_corpus_complete_processed_bert_dataset
type: gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.3705453911691882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HBERTv1_emb_compress_48_L10_H768_A12
This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1748
- Accuracy: 0.3705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 7.1074 | 0.08 | 10000 | 7.0838 | 0.0828 |
| 6.6784 | 0.16 | 20000 | 6.6795 | 0.1075 |
| 6.535 | 0.25 | 30000 | 6.5322 | 0.1192 |
| 6.4482 | 0.33 | 40000 | 6.4390 | 0.1267 |
| 6.3716 | 0.41 | 50000 | 6.3711 | 0.1324 |
| 6.3233 | 0.49 | 60000 | 6.3219 | 0.1351 |
| 6.2821 | 0.57 | 70000 | 6.2781 | 0.1383 |
| 6.251 | 0.66 | 80000 | 6.2431 | 0.1408 |
| 6.2159 | 0.74 | 90000 | 6.2111 | 0.1425 |
| 6.1838 | 0.82 | 100000 | 6.1774 | 0.1444 |
| 6.1338 | 0.9 | 110000 | 6.1349 | 0.1464 |
| 6.1022 | 0.98 | 120000 | 6.0939 | 0.1481 |
| 6.0194 | 1.07 | 130000 | 6.0080 | 0.1517 |
| 5.9309 | 1.15 | 140000 | 5.9199 | 0.1642 |
| 5.8593 | 1.23 | 150000 | 5.8326 | 0.1769 |
| 5.7093 | 1.31 | 160000 | 5.6659 | 0.2040 |
| 5.5018 | 1.39 | 170000 | 5.4433 | 0.2339 |
| 5.3036 | 1.47 | 180000 | 5.2292 | 0.2576 |
| 5.0629 | 1.56 | 190000 | 4.9895 | 0.2834 |
| 4.8311 | 1.64 | 200000 | 4.7638 | 0.3085 |
| 4.6239 | 1.72 | 210000 | 4.5799 | 0.3278 |
| 4.4305 | 1.8 | 220000 | 4.3821 | 0.3471 |
| 4.2209 | 1.88 | 230000 | 4.1749 | 0.3704 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gokuls/bert_12_layer_model_v3_complete_training_new_emb_compress_48
|
gokuls
| 2023-09-29T14:35:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-26T17:23:09Z |
---
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v3_complete_training_new_emb_compress_48
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: gokuls/wiki_book_corpus_complete_processed_bert_dataset
type: gokuls/wiki_book_corpus_complete_processed_bert_dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.1573752894874488
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v3_complete_training_new_emb_compress_48
This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9594
- Accuracy: 0.1574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 7.1148 | 0.08 | 10000 | 7.0921 | 0.0828 |
| 6.6864 | 0.16 | 20000 | 6.6879 | 0.1078 |
| 6.5451 | 0.25 | 30000 | 6.5435 | 0.1184 |
| 6.4606 | 0.33 | 40000 | 6.4515 | 0.1262 |
| 6.3851 | 0.41 | 50000 | 6.3851 | 0.1312 |
| 6.3371 | 0.49 | 60000 | 6.3357 | 0.1342 |
| 6.2971 | 0.57 | 70000 | 6.2923 | 0.1373 |
| 6.2682 | 0.66 | 80000 | 6.2605 | 0.1399 |
| 6.2352 | 0.74 | 90000 | 6.2301 | 0.1411 |
| 6.214 | 0.82 | 100000 | 6.2090 | 0.1430 |
| 6.1837 | 0.9 | 110000 | 6.1865 | 0.1443 |
| 6.1726 | 0.98 | 120000 | 6.1682 | 0.1451 |
| 6.1524 | 1.07 | 130000 | 6.1498 | 0.1464 |
| 6.1293 | 1.15 | 140000 | 6.1300 | 0.1468 |
| 6.1116 | 1.23 | 150000 | 6.1026 | 0.1479 |
| 6.0839 | 1.31 | 160000 | 6.0797 | 0.1490 |
| 6.0616 | 1.39 | 170000 | 6.0590 | 0.1499 |
| 6.0508 | 1.47 | 180000 | 6.0399 | 0.1509 |
| 6.0311 | 1.56 | 190000 | 6.0233 | 0.1517 |
| 6.015 | 1.64 | 200000 | 6.0048 | 0.1533 |
| 5.985 | 1.72 | 210000 | 5.9863 | 0.1547 |
| 5.9661 | 1.8 | 220000 | 5.9595 | 0.1573 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gianpag/dbooth
|
gianpag
| 2023-09-29T14:26:23Z | 3 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-28T13:10:13Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Professional linkedin headshot photo
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
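No usage snippet is provided. As a rough sketch (assuming the repository contains SDXL LoRA weights in the usual AutoTrain DreamBooth layout, which is not confirmed by this card), inference could look like:
```python
import torch
from diffusers import DiffusionPipeline

# Sketch only: assumes this repo ships SDXL LoRA weights in the standard AutoTrain layout.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("gianpag/dbooth")

image = pipe("Professional linkedin headshot photo", num_inference_steps=30).images[0]
image.save("headshot.png")
```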
|
Undi95/Synthia-7B-v1.3-GGUF
|
Undi95
| 2023-09-29T14:19:33Z | 45 | 11 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-09-28T22:46:06Z |
This is a GGUF quant of https://huggingface.co/migtissera/Synthia-7B-v1.3
If you want to support me, you can [here](https://ko-fi.com/undiai).
# Synthia v1.3
SynthIA (Synthetic Intelligent Agent) v1.3 is a Mistral-7B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for holding long-form conversations.
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
`Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.`
All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.
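For reference, a GGUF file like this is usually run with llama.cpp or its Python bindings. The sketch below is an illustration only: the exact GGUF filename in this repo and the full prompt template are assumptions, not taken from this card.
```python
from llama_cpp import Llama

# Illustration only: the filename and the SYSTEM/USER/ASSISTANT layout are assumptions.
system = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when necessary "
    "to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
)

llm = Llama(model_path="synthia-7b-v1.3.Q4_K_M.gguf", n_ctx=4096)  # hypothetical filename
out = llm(
    f"SYSTEM: {system}\nUSER: Explain how tides work.\nASSISTANT:",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```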
## Training Details
This was trained with QLoRA, as with all my models. The learning rate was 3e-4 with a 4096-token context length; the batch size was 64, trained on a single H100.
Training used the Synthia-v1.2 dataset, which contains Chain-of-Thought (Orca), Tree-of-Thought and long-form conversation data.
The dataset is very high quality, though not massive (about 125K samples).
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Mistral model, and comes with no warranty or guarantees of any kind.
|
qiragg/tinytext-ds_3epoch
|
qiragg
| 2023-09-29T14:11:17Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T05:45:41Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: tinytext-ds_3epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinytext-ds_3epoch
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6406
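No usage example is included; as a minimal sketch, the model can be queried with the standard `transformers` text-generation pipeline (the repo id is taken from this card, the prompt and sampling settings are arbitrary):
```python
from transformers import pipeline

# Minimal sketch; prompt and sampling settings are arbitrary choices.
generator = pipeline("text-generation", model="qiragg/tinytext-ds_3epoch")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```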
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.9109 | 1.32 | 5000 | 4.2160 |
| 3.9828 | 2.63 | 10000 | 3.9047 |
| 3.6341 | 3.95 | 15000 | 3.7160 |
| 3.3171 | 5.26 | 20000 | 3.6406 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
niklasg/test_emotion_detection_gersti
|
niklasg
| 2023-09-29T14:09:25Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-15T15:44:08Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
- f1
model-index:
- name: test_emotion_detection_gersti
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5371057513914657
- name: F1
type: f1
value: 0.14268320711165708
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_emotion_detection_gersti
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6884
- Accuracy: 0.5371
- F1: 0.1427
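As a minimal usage sketch (the emotion label set comes from the model's own config and is not documented in this card):
```python
from transformers import pipeline

# Minimal sketch; the example sentence is an arbitrary German placeholder.
classifier = pipeline("text-classification", model="niklasg/test_emotion_detection_gersti")
print(classifier("Ich freue mich riesig auf das Wochenende!"))
```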
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
alexbuyan/yt_videos_comments
|
alexbuyan
| 2023-09-29T14:00:30Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-16T20:12:46Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: yt_videos_comments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yt_videos_comments
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0918
- Accuracy: 0.6277
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1201 | 1.53 | 500 | 2.1152 | 0.6220 |
| 2.016 | 3.07 | 1000 | 2.0957 | 0.6254 |
| 1.9383 | 4.6 | 1500 | 2.0898 | 0.6271 |
| 1.8823 | 6.14 | 2000 | 2.0918 | 0.6277 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0-rc1
- Datasets 2.11.0
- Tokenizers 0.13.3
|
chakochen/flan-t5-small-destination-inference
|
chakochen
| 2023-09-29T13:57:07Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-09-29T11:12:34Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-small-destination-inference
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-destination-inference
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1533
- Rouge1: 93.7111
- Rouge2: 0.0
- Rougel: 93.7462
- Rougelsum: 93.7462
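The expected input format is not documented here; the sketch below only shows how a flan-t5 checkpoint like this is normally invoked, with a placeholder prompt rather than the format the model was actually trained on:
```python
from transformers import pipeline

# Minimal sketch; the input string is a placeholder, not the trained prompt format.
infer = pipeline("text2text-generation", model="chakochen/flan-t5-small-destination-inference")
print(infer("Itinerary: JFK -> LHR -> ???", max_new_tokens=16))
```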
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 1.5338 | 1.0 | 5701 | 0.2460 | 89.4132 | 0.0 | 89.4395 | 89.4483 |
| 1.2443 | 2.0 | 11402 | 0.2024 | 90.8692 | 0.0 | 90.8868 | 90.8955 |
| 1.1477 | 3.0 | 17103 | 0.1810 | 91.8779 | 0.0 | 91.8954 | 91.8954 |
| 1.0878 | 4.0 | 22804 | 0.1693 | 92.5445 | 0.0 | 92.5621 | 92.5621 |
| 1.0495 | 5.0 | 28505 | 0.1609 | 93.3164 | 0.0 | 93.3427 | 93.3339 |
| 1.0178 | 6.0 | 34206 | 0.1556 | 93.4041 | 0.0 | 93.4216 | 93.4304 |
| 0.9981 | 7.0 | 39907 | 0.1542 | 93.6935 | 0.0 | 93.7286 | 93.7286 |
| 0.9848 | 8.0 | 45608 | 0.1533 | 93.7111 | 0.0 | 93.7462 | 93.7462 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
reginaboateng/final_compacter_pubmeqa
|
reginaboateng
| 2023-09-29T13:55:23Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pubmedqa",
"dataset:pubmedqa",
"region:us"
] | null | 2023-09-29T13:55:19Z |
---
tags:
- adapter-transformers
- bert
- adapterhub:pubmedqa
datasets:
- pubmedqa
---
# Adapter `reginaboateng/final_compacter_pubmeqa` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pubmedqa](https://adapterhub.ml/explore/pubmedqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/final_compacter_pubmeqa", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Irvanaja/Sovits.teio
|
Irvanaja
| 2023-09-29T13:54:52Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-29T13:54:52Z |
---
license: bigscience-openrail-m
---
|
Bushman78/Daganjourneyv1
|
Bushman78
| 2023-09-29T13:45:59Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-27T17:36:39Z |
---
license: creativeml-openrail-m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nadinegp/llama2-qlora-finetunined-pharoh
|
Nadinegp
| 2023-09-29T13:33:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T13:32:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
Yntec/3Danimation
|
Yntec
| 2023-09-29T13:32:47Z | 375 | 10 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"Disney",
"3D",
"Lykon",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-29T12:47:37Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Disney
- 3D
- Lykon
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
language:
- en
inference: true
---
# 3D Animation Diffusion
Original model page: https://civitai.com/models/118086/3d-animation-diffusion
Sample and prompt:

Cartoon Pretty CUTE Girl, DETAILED CHIBI EYES, ilya kuvshinov detailed legs, gorgeous detailed hair, high school, Magazine ad, iconic, 1949, sharp focus. visible brushstrokes By KlaysMoji and artgerm and Clay Mann and and leyendecker and simon cowell. By Dave Rapoza. Pretty CUTE girl.
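Since the repository ships a `StableDiffusionPipeline` (per the diffusers tags), a minimal local-inference sketch (sampling settings are arbitrary) is:
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; steps and guidance scale are arbitrary choices.
pipe = StableDiffusionPipeline.from_pretrained("Yntec/3Danimation", torch_dtype=torch.float16).to("cuda")
prompt = "Cartoon Pretty CUTE Girl, DETAILED CHIBI EYES, gorgeous detailed hair, Magazine ad, iconic, 1949, sharp focus"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("3danimation_sample.png")
```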
|
LeoLM/leo-hessianai-13b-chat-bilingual
|
LeoLM
| 2023-09-29T13:16:56Z | 19 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:bjoernp/oasst25-08-23-filtered",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-10T08:27:09Z |
---
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_70k
- bjoernp/oasst25-08-23-filtered
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-13b-chat-bilingual` is a bilingual English-German chat model built on our foundation model `LeoLM/leo-hessianai-13b` and finetuned on a selection of German translated instruction datasets and their English counterparts.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench scores:
```
{
"first_turn": 6.13125,
"second_turn": 4.88125,
"categories": {
"writing": 6.75,
"roleplay": 5.55,
"reasoning": 3.3,
"math": 2.25,
"coding": 3.9,
"extraction": 5.8,
"stem": 7.55,
"humanities": 8.95
},
"average": 5.50625
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-13b-chat-bilingual", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
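A small helper that assembles such a prompt from a list of turns might look like this (an illustration only, not part of the repository):
```python
# Illustrative helper; not part of the LeoLM repository.
def build_chatml_prompt(system_message: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (role, content) pairs, with role in {"user", "assistant"}."""
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>\n"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # leave the final assistant turn open for generation
    return "".join(parts)

prompt = build_chatml_prompt(
    "Dies ist eine Unterhaltung zwischen einem KI-Assistenten und einem Nutzer.",
    [("user", "Wie ist das Wetter in Hamburg?")],
)
```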
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-13b-chat-bilingual` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-13b-chat-bilingual`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 233275 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
| Weight decay | 0.001 |
## Dataset Details
```
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of garage-bAInd/Open-Platypus' (24427 samples (100.0%))
-----------------
Accepted: 24427/24427 (100.0%)
Accepted tokens: 9549043
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5054
Avg tokens per sample: 390.9216440823679
-----------------
## Stats for 'Subset of WizardLM/WizardLM_evol_instruct_70k' (68600 samples (100.0%))
-----------------
Accepted: 68600/68600 (100.0%)
Accepted tokens: 33045040
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 481.7061224489796
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'Subset of OpenAssistant/OASST_DE' (3646 samples (100.0%))
-----------------
Accepted: 3646/3646 (100.0%)
Accepted tokens: 2338738
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 641.4530992868897
-----------------
## Stats for 'Subset of bjoernp/oasst25-08-23-filtered' (8922 samples (100.0%))
-----------------
Accepted: 8922/8922 (100.0%)
Accepted tokens: 4526427
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5407
Avg tokens per sample: 507.3332212508406
-----------------
## Stats for 'total' (235632 samples (100.0%))
-----------------
Accepted: 235632/235632 (100.0%)
Accepted tokens: 115862397
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 491.70909299246284
-----------------
```
|
LeoLM/leo-hessianai-7b-chat-bilingual
|
LeoLM
| 2023-09-29T13:16:38Z | 1,463 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"de",
"dataset:LeoLM/OpenSchnabeltier",
"dataset:OpenAssistant/OASST-DE",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:LeoLM/German_Poems",
"dataset:LeoLM/German_Songs",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:bjoernp/oasst25-08-23-filtered",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-10T19:00:52Z |
---
datasets:
- LeoLM/OpenSchnabeltier
- OpenAssistant/OASST-DE
- FreedomIntelligence/alpaca-gpt4-deutsch
- FreedomIntelligence/evol-instruct-deutsch
- LeoLM/German_Poems
- LeoLM/German_Songs
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_70k
- bjoernp/oasst25-08-23-filtered
language:
- en
- de
library_name: transformers
pipeline_tag: text-generation
---
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## LeoLM Chat
`LeoLM/leo-hessianai-7b-chat-bilingual` is a bilingual English-German chat model built on our foundation model `LeoLM/leo-hessianai-7b` and finetuned on a selection of German translated instruction datasets and their English counterparts.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench scores:
```
{
"first_turn": 5.64375,
"second_turn": 4.075,
"categories": {
"writing": 5.925,
"roleplay": 5.25,
"reasoning": 3.1,
"math": 1.8,
"coding": 3.4,
"extraction": 5,
"stem": 6.5,
"humanities": 7.9
},
"average": 4.859375
}
```
## Model Details
- **Finetuned from:** [LeoLM/leo-hessianai-7b](https://huggingface.co/LeoLM/leo-hessianai-7b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **Demo:** [Web Demo]()
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:[email protected])
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import pipeline
import torch
system_prompt = """<|im_start|>system
Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer.
Der Assistent gibt ausführliche, hilfreiche und ehrliche Antworten.<|im_end|>
"""
prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
prompt = "Erkläre mir wie die Fahrradwegesituation in Hamburg ist."
generator = pipeline(model="LeoLM/leo-hessianai-7b-chat-bilingual", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False
print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192))
```
"*Als KI kann ich keine persönlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen großen Städten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung für nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermöglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darüber hinaus gibt es viele Fahrradstraßen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren müssen.*
*In einigen städtischen Gebieten können Fahrradwege jedoch eng oder überfüllt sein, besonders während der Stoßzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grün warten müssen, ähnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie überall gibt es immer Raum für Verbesserungen.*"
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
## Ethical Considerations and Limitations
LeoLM has been tested in English and German, but this testing has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat-bilingual` cannot be predicted
in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat-bilingual`, developers should
perform safety testing and tuning tailored to their specific applications of the model.
Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
## Finetuning Details
| Hyperparameter | Value |
|---|---|
| Num epochs | 3 |
| Examples per epoch | 233275 |
| Global batch size | 256 |
| Learning rate | 3e-5 |
| Warmup steps | 100 |
| LR scheduler | Cosine |
| Adam betas | (0.9, 0.95) |
| Weight decay | 0.001 |
## Dataset Details
```
## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%))
-----------------
Accepted: 21314/21314 (100.0%)
Accepted tokens: 8134690
Skipped: 0 (0.0%)
Min tokens per sample: 25
Max tokens per sample: 1202
Avg tokens per sample: 381.65947264708643
-----------------
## Stats for 'Subset of garage-bAInd/Open-Platypus' (24427 samples (100.0%))
-----------------
Accepted: 24427/24427 (100.0%)
Accepted tokens: 9549043
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5054
Avg tokens per sample: 390.9216440823679
-----------------
## Stats for 'Subset of WizardLM/WizardLM_evol_instruct_70k' (68600 samples (100.0%))
-----------------
Accepted: 68600/68600 (100.0%)
Accepted tokens: 33045040
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 481.7061224489796
-----------------
## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%))
-----------------
Accepted: 57841/57841 (100.0%)
Accepted tokens: 42958192
Skipped: 0 (0.0%)
Min tokens per sample: 33
Max tokens per sample: 5507
Avg tokens per sample: 742.6944900675991
-----------------
## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%))
-----------------
Accepted: 48969/48969 (100.0%)
Accepted tokens: 13372005
Skipped: 0 (0.0%)
Min tokens per sample: 19
Max tokens per sample: 1359
Avg tokens per sample: 273.07082031489307
-----------------
## Stats for 'Subset of LeoLM/German_Songs' (490 samples (100.0%))
-----------------
Accepted: 490/490 (100.0%)
Accepted tokens: 618642
Skipped: 0 (0.0%)
Min tokens per sample: 747
Max tokens per sample: 1678
Avg tokens per sample: 1262.534693877551
-----------------
## Stats for 'Subset of LeoLM/German_Poems' (392 samples (100.0%))
-----------------
Accepted: 392/392 (100.0%)
Accepted tokens: 187897
Skipped: 0 (0.0%)
Min tokens per sample: 231
Max tokens per sample: 826
Avg tokens per sample: 479.3290816326531
-----------------
## Stats for 'Subset of OpenAssistant/OASST_DE' (3646 samples (100.0%))
-----------------
Accepted: 3646/3646 (100.0%)
Accepted tokens: 2338738
Skipped: 0 (0.0%)
Min tokens per sample: 29
Max tokens per sample: 2484
Avg tokens per sample: 641.4530992868897
-----------------
## Stats for 'Subset of bjoernp/oasst25-08-23-filtered' (8922 samples (100.0%))
-----------------
Accepted: 8922/8922 (100.0%)
Accepted tokens: 4526427
Skipped: 0 (0.0%)
Min tokens per sample: 23
Max tokens per sample: 5407
Avg tokens per sample: 507.3332212508406
-----------------
## Stats for 'total' (235632 samples (100.0%))
-----------------
Accepted: 235632/235632 (100.0%)
Accepted tokens: 115862397
Skipped: 0 (0.0%)
Min tokens per sample: 18
Max tokens per sample: 11810
Avg tokens per sample: 491.70909299246284
-----------------
```
|
Omid-sar/fine-tuning-llama2-7b-qlora-french
|
Omid-sar
| 2023-09-29T13:16:37Z | 6 | 1 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-09-18T20:44:17Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
# Fine-tuning Llama-2-7b using QLoRA in French on Google Colab
## Goal
The goal of this project is to adapt the Llama-2-7b model, which initially might not have proficiency in French, to understand and respond accurately to queries in the French language. This adaptation involves fine-tuning the model on a dataset of French novels, allowing it to comprehend the nuances, syntax, and semantics of the French language. By leveraging the PEFT library from the Hugging Face ecosystem and QLoRA for more memory-efficient fine-tuning on a single T4 GPU provided by Google Colab, we aim to create a chatbot that can effectively answer questions posed in French.
## Overview
This project involves several steps including setting up the environment, loading the dataset and model, configuring QLoRA and training parameters, training the model, and finally testing and pushing the fine-tuned model to Hugging Face.
## Features
- **Dataset Loading**: Load and process a French novels dataset using Hugging Face datasets library.
- **Model Quantization**: Quantize the base Llama-2-7b model into 4-bit using bitsandbytes.
- **Configuration for QLoRA**: Apply the QLoRA configuration for more memory-efficient fine-tuning using the PEFT library.
- **Training**: Use the SFTTrainer from the TRL library for instruction-based fine-tuning.
- **Testing and Pushing to Hugging Face**: Test the fine-tuned model and push it to Hugging Face.
## Prerequisites
- Google Colab with T4 GPU
- Python libraries: trl, transformers, accelerate, peft, datasets, bitsandbytes, einops
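The steps above are described in prose only; the condensed sketch below illustrates that setup (4-bit quantization with bitsandbytes, a LoRA config via PEFT, and TRL's `SFTTrainer`). The dataset file, text column and LoRA hyperparameters are placeholders, not the values actually used for this adapter.
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# Sketch of the workflow described above; dataset and hyperparameters are placeholders.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
dataset = load_dataset("json", data_files="french_novels.jsonl", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # placeholder column name
    tokenizer=tokenizer,
    max_seq_length=512,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4, num_train_epochs=1),
)
trainer.train()
```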
|
Tiabet/kogpt2-base-v2-finetuned-koGPT-complete_story-finetuned-koGPT-complete_story
|
Tiabet
| 2023-09-29T13:09:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text generation",
"generated_from_trainer",
"base_model:Tiabet/kogpt2-base-v2-finetuned-koGPT-complete_story",
"base_model:finetune:Tiabet/kogpt2-base-v2-finetuned-koGPT-complete_story",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T07:47:57Z |
---
license: cc-by-nc-sa-4.0
base_model: Tiabet/kogpt2-base-v2-finetuned-koGPT-complete_story
tags:
- text generation
- generated_from_trainer
model-index:
- name: kogpt2-base-v2-finetuned-koGPT-complete_story-finetuned-koGPT-complete_story
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-koGPT-complete_story-finetuned-koGPT-complete_story
This model is a fine-tuned version of [Tiabet/kogpt2-base-v2-finetuned-koGPT-complete_story](https://huggingface.co/Tiabet/kogpt2-base-v2-finetuned-koGPT-complete_story) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2518 | 1.0 | 3755 | 2.9790 |
| 3.8162 | 2.0 | 7510 | 2.9607 |
| 3.2811 | 3.0 | 11265 | 3.2092 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
domischwimmbeck/bert-base-german-cased-20000-ner-uncased
|
domischwimmbeck
| 2023-09-29T12:51:17Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-german-uncased",
"base_model:finetune:dbmdz/bert-base-german-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-20T13:36:50Z |
---
license: mit
base_model: dbmdz/bert-base-german-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-20000-ner-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-20000-ner-uncased
This model is a fine-tuned version of [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.8871
- Recall: 0.9013
- F1: 0.8941
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.34 | 64 | 0.0573 | 0.8859 | 0.8526 | 0.8689 | 0.9837 |
| No log | 0.68 | 128 | 0.0654 | 0.8107 | 0.8957 | 0.8511 | 0.9808 |
| No log | 1.02 | 192 | 0.0531 | 0.8654 | 0.8846 | 0.8749 | 0.9842 |
| No log | 1.35 | 256 | 0.0467 | 0.8847 | 0.8853 | 0.8850 | 0.9857 |
| No log | 1.69 | 320 | 0.0466 | 0.9102 | 0.8883 | 0.8992 | 0.9864 |
| No log | 2.03 | 384 | 0.0467 | 0.8794 | 0.8951 | 0.8872 | 0.9854 |
| No log | 2.37 | 448 | 0.0520 | 0.8864 | 0.9001 | 0.8932 | 0.9851 |
| 0.0531 | 2.71 | 512 | 0.0549 | 0.8894 | 0.8877 | 0.8885 | 0.9854 |
| 0.0531 | 3.05 | 576 | 0.0534 | 0.8942 | 0.8920 | 0.8931 | 0.9857 |
| 0.0531 | 3.39 | 640 | 0.0526 | 0.8917 | 0.8994 | 0.8956 | 0.9856 |
| 0.0531 | 3.72 | 704 | 0.0576 | 0.9049 | 0.8976 | 0.9012 | 0.9857 |
| 0.0531 | 4.06 | 768 | 0.0700 | 0.8529 | 0.9229 | 0.8865 | 0.9830 |
| 0.0531 | 4.4 | 832 | 0.0657 | 0.8716 | 0.9167 | 0.8936 | 0.9840 |
| 0.0531 | 4.74 | 896 | 0.0617 | 0.8871 | 0.9013 | 0.8941 | 0.9848 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_3e-05_lr_2_e
|
roa7n
| 2023-09-29T12:49:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T12:49:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
alexisdpc/my_awesome_wnut_model
|
alexisdpc
| 2023-09-29T12:30:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-29T12:05:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5716694772344013
- name: Recall
type: recall
value: 0.31417979610750696
- name: F1
type: f1
value: 0.4055023923444976
- name: Accuracy
type: accuracy
value: 0.9413877132230345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2696
- Precision: 0.5717
- Recall: 0.3142
- F1: 0.4055
- Accuracy: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2756 | 0.5691 | 0.2632 | 0.3599 | 0.9389 |
| No log | 2.0 | 426 | 0.2696 | 0.5717 | 0.3142 | 0.4055 | 0.9414 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ayoubkirouane/BERT-base_NER-ar
|
ayoubkirouane
| 2023-09-29T12:19:39Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ar",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-29T11:24:24Z |
---
datasets:
- wikiann
language:
- ar
pipeline_tag: token-classification
---
## Model Name: BERT-base_NER-ar
### Model Description :
**BERT-base_NER-ar** is a fine-tuned **BERT** multilingual base model for Named Entity Recognition (NER) in Arabic. The base model was pretrained on a diverse set of languages and fine-tuned specifically for the task of NER using the "wikiann" dataset. This model is case-sensitive, distinguishing between different letter cases, such as "english" and "English."
### Dataset
The model was fine-tuned on the **wikiann** dataset, which is a multilingual named entity recognition dataset. It contains Wikipedia articles annotated with three types of named entities: LOC (location), PER (person), and ORG (organization). The annotations are in the IOB2 format. The dataset supports 176 of the 282 languages from the original WikiANN corpus.
### Supported Tasks and Leaderboards
The primary supported task for this model is named entity recognition (NER) in Arabic. However, it can also be used to explore the zero-shot cross-lingual capabilities of multilingual models, allowing for NER in various languages.
### Use Cases
+ **Arabic Named Entity Recognition**: *BERT-base_NER-ar* can be used to extract named entities (such as names of people, locations, and organizations) from Arabic text. This is valuable for information retrieval, text summarization, and content analysis in Arabic language applications.
+ **Multilingual NER**: The model's multilingual capabilities enable it to perform NER in other languages supported by the "wikiann" dataset, making it versatile for cross-lingual NER tasks.
### Limitations
+ **Language Limitation**: While the model supports multiple languages, it may not perform equally well in all of them. Performance could vary depending on the quality and quantity of training data available for specific languages.
+ **Fine-Tuning Data**: The model's performance is dependent on the quality and representativeness of the fine-tuning data (the "wikiann" dataset in this case). If the dataset is limited or biased, it may affect the model's performance.
## Usage :
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
# Load the fine-tuned model
model = AutoModelForTokenClassification.from_pretrained("ayoubkirouane/BERT-base_NER-ar")
tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/BERT-base_NER-ar")
# Tokenize your input text
text = "عاصمة فلسطين هي القدس الشريف."
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(text)))
# Convert tokens to input IDs
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# Perform NER inference
with torch.no_grad():
    outputs = model(torch.tensor([input_ids]))
# Get the predicted labels for each token
predicted_labels = outputs[0].argmax(dim=2).cpu().numpy()[0]
# Map label IDs to human-readable labels
predicted_labels = [model.config.id2label[label_id] for label_id in predicted_labels]
# Print the tokenized text and its associated labels
for token, label in zip(tokens, predicted_labels):
    print(f"Token: {token}, Label: {label}")
```
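Equivalently, and with less manual bookkeeping, the same model can be queried through the token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into whole entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ayoubkirouane/BERT-base_NER-ar",
    aggregation_strategy="simple",  # merge sub-word tokens into full entities
)
print(ner("عاصمة فلسطين هي القدس الشريف."))
```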
|
ldos/text_shortening_model_v64
|
ldos
| 2023-09-29T12:13:37Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-29T11:34:16Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: text_shortening_model_v64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3622
- Bert precision: 0.7381
- Bert recall: 0.7763
- Bert f1-score: 0.7541
- Average word count: 9.0345
- Max word count: 14
- Min word count: 2
- Average token count: 15.5862
- % shortened texts with length > 12: 20.6897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 3.1461 | 1.0 | 5 | 2.3622 | 0.7381 | 0.7763 | 0.7541 | 9.0345 | 14 | 2 | 15.5862 | 20.6897 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-10-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T12:10:08Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T09:00:30Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-10-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-10-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5096
- Loss: 2.4163
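A minimal sketch for querying the model with the fill-mask pipeline (the example sentence is an arbitrary placeholder):
```python
from transformers import pipeline

# Minimal sketch; XLM-RoBERTa models use "<mask>" as the mask token.
fill = pipeline("fill-mask", model="soBeauty/V2_20230929-10-xlm-roberta-base-new")
print(fill("Paris is the <mask> of France."))
```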
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.2884 | 1.38 | 200 | 0.2958 | 4.0095 |
| 3.8429 | 2.76 | 400 | 0.3650 | 3.7295 |
| 3.5677 | 4.14 | 600 | 0.4377 | 3.2236 |
| 3.3967 | 5.52 | 800 | 0.4311 | 3.0356 |
| 3.3011 | 6.9 | 1000 | 0.4883 | 2.9507 |
| 3.073 | 8.28 | 1200 | 0.4906 | 2.7251 |
| 2.9435 | 9.66 | 1400 | 0.4484 | 2.9997 |
| 2.9574 | 11.03 | 1600 | 0.4580 | 2.6966 |
| 2.8692 | 12.41 | 1800 | 0.5356 | 2.5604 |
| 2.7694 | 13.79 | 2000 | 0.5096 | 2.4163 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
wkpark/mmyolo-yolov8
|
wkpark
| 2023-09-29T12:06:51Z | 0 | 1 |
ultralytics
|
[
"ultralytics",
"mmyolo",
"mmdetection",
"yolov8",
"license:agpl-3.0",
"region:us"
] | null | 2023-09-27T16:25:12Z |
---
license: agpl-3.0
tags:
- mmyolo
- ultralytics
- mmdetection
- yolov8
---
## YOLOv8 models converted to MMYOLO format
YOLOv8 models converted with MMYOLO's conversion tool, for use with MMDetection-based applications such as DDetailer.
- used converter: https://github.com/open-mmlab/mmyolo/tree/main/tools/model_converters
- original yolov8 models from https://huggingface.co/Bingsu/adetailer - author [@Bing-su](https://huggingface.co/Bingsu)
|
ostris/ikea-instructions-lora-sdxl
|
ostris
| 2023-09-29T11:55:15Z | 4,664 | 235 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"style",
"styles",
"instruction_manual",
"ikea",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-29T11:55:12Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- style
- styles
- instruction_manual
- ikea
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: "where is waldo "
- text: "sleep"
- text: "hamburger,, lettuce, mayo, lettuce, no tomato "
- text: "barbie and ken "
- text: "back to the future "
- text: "the dude, form the movie the big lebowski, drinking, rug wet, bowling ball "
- text: "hippie "
- text: " fat man, eat pizza, eat hamburgers, drink beer"
- text: " dayman, fighter of the night man, champion of the sun"
- text: " nightman, pay troll toll to get into that boys hole"
---
# Ikea Instructions - LoRA - SDXL

> where is waldo
<p>No trigger word is needed. A weight of 1.0 works well on the SDXL 1.0 base. Negatives are usually not needed, but "blurry" and "low quality" seem to help. You can use simple prompts such as "hamburger" or describe the steps you want it to show; SDXL does a pretty good job of figuring out the steps on its own. A minimal diffusers usage sketch is shown below.</p>
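A minimal usage sketch with diffusers, assuming a CUDA device and the standard SDXL LoRA-loading API; the prompt is only an example:
```python
# Hedged sketch: load the LoRA on the SDXL 1.0 base and generate an image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ostris/ikea-instructions-lora-sdxl")

image = pipe(
    "hamburger",                            # simple prompts work; no trigger word needed
    negative_prompt="blurry, low quality",  # optional, per the note above
).images[0]
image.save("ikea_hamburger.png")
```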
## Image examples for the model:

> sleep

> hamburger,, lettuce, mayo, lettuce, no tomato

> barbie and ken

> back to the future

> the dude, form the movie the big lebowski, drinking, rug wet, bowling ball

> hippie

> fat man, eat pizza, eat hamburgers, drink beer

> dayman, fighter of the night man, champion of the sun

> nightman, pay troll toll to get into that boys hole
|
roa7n/gpt2-human_nontata_promoters-randomized_9_layers_0.003_lr_2_e
|
roa7n
| 2023-09-29T11:43:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T11:43:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
elenafr/bert-finetuned-squad
|
elenafr
| 2023-09-29T11:38:40Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:deepset/bert-large-uncased-whole-word-masking-squad2",
"base_model:finetune:deepset/bert-large-uncased-whole-word-masking-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-28T16:22:23Z |
---
license: cc-by-4.0
base_model: deepset/bert-large-uncased-whole-word-masking-squad2
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-8-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T11:38:13Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T08:35:34Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-8-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-8-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5333
- Loss: 2.6271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3526 | 1.38 | 200 | 0.2971 | 3.8765 |
| 3.8293 | 2.76 | 400 | 0.3692 | 3.3059 |
| 3.5091 | 4.14 | 600 | 0.4261 | 3.1166 |
| 3.382 | 5.52 | 800 | 0.4662 | 2.8632 |
| 3.1966 | 6.9 | 1000 | 0.4622 | 2.8866 |
| 3.1158 | 8.28 | 1200 | 0.4588 | 2.8542 |
| 2.9343 | 9.66 | 1400 | 0.4568 | 2.7541 |
| 2.8719 | 11.03 | 1600 | 0.4286 | 2.7540 |
| 2.8378 | 12.41 | 1800 | 0.5074 | 2.6573 |
| 2.8196 | 13.79 | 2000 | 0.5333 | 2.6271 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tolpem/distilbert-base-uncased-finetuned-imdb
|
tolpem
| 2023-09-29T11:22:48Z | 71 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T11:17:44Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: tolpem/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tolpem/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8561
- Validation Loss: 2.5781
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent optimizer sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
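The serialized optimizer dictionary above corresponds to the TF `AdamWeightDecay` optimizer with a 1000-step linear warmup and a 0.01 weight-decay rate, as produced by `transformers.create_optimizer`. A minimal sketch, assuming that helper; the total step count is inferred from the logged `decay_steps` of -688 (decay_steps = total steps minus warmup steps, i.e. about 312 steps) and should be treated as an assumption:
```python
# Hedged sketch: rebuild an equivalent AdamWeightDecay optimizer with linear warmup for TF/Keras training.
from transformers import create_optimizer

num_train_steps = 312  # assumed: warmup_steps (1000) + logged decay_steps (-688)
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=num_train_steps,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```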
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8561 | 2.5781 | 0 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-7-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T11:21:10Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T08:23:11Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-7-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-7-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5234
- Loss: 2.6183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3266 | 1.38 | 200 | 0.3622 | 3.6841 |
| 3.8917 | 2.76 | 400 | 0.4023 | 3.4532 |
| 3.533 | 4.14 | 600 | 0.4307 | 3.0664 |
| 3.3332 | 5.52 | 800 | 0.4532 | 2.9906 |
| 3.1976 | 6.9 | 1000 | 0.4440 | 3.0366 |
| 3.0943 | 8.28 | 1200 | 0.4545 | 3.0235 |
| 2.9444 | 9.66 | 1400 | 0.5699 | 2.2568 |
| 2.9067 | 11.03 | 1600 | 0.4538 | 2.9683 |
| 2.8052 | 12.41 | 1800 | 0.4820 | 2.6102 |
| 2.807 | 13.79 | 2000 | 0.5234 | 2.6183 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_8_layers_3e-05_lr_8_e
|
roa7n
| 2023-09-29T11:11:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-29T11:11:19Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
soBeauty/V2_20230929-6-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T11:06:03Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T08:11:40Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-6-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-6-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4553
- Loss: 2.9349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4437 | 1.38 | 200 | 0.2667 | 4.2562 |
| 3.9303 | 2.76 | 400 | 0.3571 | 3.6777 |
| 3.5316 | 4.14 | 600 | 0.3904 | 3.5382 |
| 3.2553 | 5.52 | 800 | 0.4615 | 3.1063 |
| 3.1387 | 6.9 | 1000 | 0.4494 | 2.9509 |
| 3.0595 | 8.28 | 1200 | 0.4506 | 2.9728 |
| 2.9643 | 9.66 | 1400 | 0.4380 | 2.8324 |
| 2.8917 | 11.03 | 1600 | 0.4667 | 2.8319 |
| 2.8581 | 12.41 | 1800 | 0.4681 | 2.9051 |
| 2.8575 | 13.79 | 2000 | 0.4553 | 2.9349 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hardikcode/distilbert-base-uncased-finetuned-imdb
|
hardikcode
| 2023-09-29T10:53:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T10:50:07Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4968 |
| 2.5794 | 2.0 | 314 | 2.4281 |
| 2.5354 | 3.0 | 471 | 2.4509 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-5-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T10:51:07Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T08:00:28Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-5-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-5-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5181
- Loss: 2.5292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3451 | 1.38 | 200 | 0.3686 | 3.5221 |
| 3.8508 | 2.76 | 400 | 0.4402 | 3.2092 |
| 3.5934 | 4.14 | 600 | 0.3908 | 3.4233 |
| 3.1956 | 5.52 | 800 | 0.4317 | 3.3102 |
| 3.2828 | 6.9 | 1000 | 0.4704 | 2.9782 |
| 3.1068 | 8.28 | 1200 | 0.5019 | 2.6751 |
| 2.9976 | 9.66 | 1400 | 0.4493 | 3.0054 |
| 2.9072 | 11.03 | 1600 | 0.4189 | 3.0985 |
| 2.8663 | 12.41 | 1800 | 0.5385 | 2.4444 |
| 2.804 | 13.79 | 2000 | 0.5181 | 2.5292 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Schelle7/my_awesome_qa_model
|
Schelle7
| 2023-09-29T10:51:07Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-29T09:52:02Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.5508 |
| 0.9804 | 2.0 | 500 | 1.6304 |
| 0.9804 | 3.0 | 750 | 1.6812 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soBeauty/V2_20230929-4-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T10:36:13Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T07:48:49Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-4-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-4-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4980
- Loss: 2.6341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4422 | 1.38 | 200 | 0.2888 | 4.2369 |
| 3.9018 | 2.76 | 400 | 0.3333 | 3.9767 |
| 3.5709 | 4.14 | 600 | 0.3669 | 3.5533 |
| 3.3829 | 5.52 | 800 | 0.3891 | 3.3396 |
| 3.2242 | 6.9 | 1000 | 0.4244 | 3.0648 |
| 3.0837 | 8.28 | 1200 | 0.4515 | 3.2200 |
| 2.9448 | 9.66 | 1400 | 0.4637 | 2.8563 |
| 2.8529 | 11.03 | 1600 | 0.4664 | 2.9343 |
| 2.8343 | 12.41 | 1800 | 0.4498 | 3.1041 |
| 2.813 | 13.79 | 2000 | 0.4980 | 2.6341 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
pembelajarff/moviereview-ds-mini
|
pembelajarff
| 2023-09-29T10:31:58Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T10:31:31Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: moviereview-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# moviereview-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1821
- Validation Loss: 7.8696
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2500 | 9.5646 | 0 |
| 9.1560 | 8.7719 | 1 |
| 8.1821 | 7.8696 | 2 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
pembelajarff/movie_review
|
pembelajarff
| 2023-09-29T10:30:02Z | 125 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-19T04:24:33Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: pembelajarff/movie_review
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pembelajarff/movie_review
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.1821
- Validation Loss: 7.8696
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2500 | 9.5646 | 0 |
| 9.1560 | 8.7719 | 1 |
| 8.1821 | 7.8696 | 2 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Thenujan/ViT-H-14
|
Thenujan
| 2023-09-29T10:28:16Z | 2 | 0 |
open_clip
|
[
"open_clip",
"feature-extraction",
"en",
"license:other",
"region:us"
] |
feature-extraction
| 2023-08-29T12:51:04Z |
---
license: other
language:
- en
metrics:
- mape
library_name: open_clip
pipeline_tag: feature-extraction
---
|
pavithrav/distilbert-base-uncased-finetuned-emotion
|
pavithrav
| 2023-09-29T10:26:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-29T10:26:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2215
- Accuracy: 0.9235
- F1: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8569 | 1.0 | 250 | 0.3312 | 0.901 | 0.8994 |
| 0.2561 | 2.0 | 500 | 0.2215 | 0.9235 | 0.9236 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Weyaxi/ChatAYT-Lora-Assamble-Marcoroni-v2
|
Weyaxi
| 2023-09-29T10:22:18Z | 20 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-14T07:43:32Z |
<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
|
soBeauty/V2_20230929-3-xlm-roberta-base-new
|
soBeauty
| 2023-09-29T10:21:45Z | 157 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-29T07:37:05Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V2_20230929-3-xlm-roberta-base-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_20230929-3-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.5378
- Loss: 2.2727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.3145 | 1.38 | 200 | 0.2955 | 3.8793 |
| 3.8469 | 2.76 | 400 | 0.3398 | 3.7082 |
| 3.4996 | 4.14 | 600 | 0.4110 | 3.1106 |
| 3.4055 | 5.52 | 800 | 0.3919 | 3.1465 |
| 3.1658 | 6.9 | 1000 | 0.4786 | 2.9087 |
| 3.1597 | 8.28 | 1200 | 0.4128 | 3.0067 |
| 2.9918 | 9.66 | 1400 | 0.4664 | 2.7497 |
| 2.8913 | 11.03 | 1600 | 0.4580 | 2.6409 |
| 2.8172 | 12.41 | 1800 | 0.4449 | 2.9132 |
| 2.9125 | 13.79 | 2000 | 0.5378 | 2.2727 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tejp/human-actions
|
tejp
| 2023-09-29T10:13:22Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-29T09:42:49Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: human-actions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# human-actions
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Human_Action_Recognition dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1747
- Accuracy: 0.0676
- F1: 0.0084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3842 | 2.54 | 1000 | 7.1747 | 0.0676 | 0.0084 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SakataHalmi/Reinforce-Pixelcopter-PLE-v0
|
SakataHalmi
| 2023-09-29T10:09:25Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-28T20:27:16Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 68.80 +/- 55.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|